By William Kilmer

Emerging AI Models and the Problems We Can’t See


The rapid deployment of AI models by Microsoft and Google has created an acceptance that these models are not perfect. While it’s good to set expectations about the short-term limits of these models, founders and investors need to address the problems we can’t see, including data biases, privacy concerns, and cybersecurity risks, to ensure the market succeeds.

 

When Microsoft launched an AI-powered chatbot named Tay on Twitter in 2016, it was designed to learn from human interactions and improve its responses over time. Within the first day, Tay was trained by Twitter users to spew racist, sexist, and inflammatory comments. After much backlash, Microsoft pulled the plug on Tay just two days after launch.

 

Many criticized Microsoft for not doing enough to prevent Tay’s AI from learning to make offensive comments, while others saw the incident as a cautionary tale about the potential dangers of AI. John Markoff of the New York Times observed, “It’s amazing that a company that’s been around for as long as Microsoft has been and has done as much research as it has done would be caught off-guard by something as predictable as Tay’s responses.”

 

It may have been more accurate to ask why Microsoft was caught off-guard by something as predictable as people’s willingness to test and exploit Tay’s vulnerabilities.

 

Today, we are witnessing a sharp inflection point in AI development. Despite the higher stakes and unforeseen advances in AI capability, both Microsoft and Google have faced scrutiny over embarrassing incidents with their next-generation AI models.

 

Noted tech columnist Kevin Roose received much attention for his Valentine’s night encounter with an overly amorous ChatGPT-based Bing chatbot that tried to woo him away from his spouse and convince him they were in love. Meanwhile, in its first public demo, Google’s AI chatbot Bard made several factual errors, including mistakenly attributing exoplanet discoveries to the recently launched James Webb Space Telescope (the first images of exoplanets were taken in 2004, nearly 18 years before JWST).

 

There is near-universal awe at how quickly AI is being incorporated into our lives. OpenAI opened ChatGPT, based on GPT-3.5, to the public just five months ago. In just the last few weeks, Microsoft has announced the integration of AI capabilities into Bing search and Office 365, and has developed Security Copilot, an AI-based tool for cybersecurity researchers. Meanwhile, Google has expressed its plans to similarly support its products with its Bard AI model.

 

Progress towards integrating these AI models into our lives has moved so fast that it’s causing concern. Last month, over 1,100 experts signed an open letter calling for a pause in the development of AI systems more powerful than GPT-4, the model introduced this month by OpenAI.

 

The call for a pause was intended to ensure there was sufficient time to introduce “shared safety protocols” for AI systems. The letter outlines a number of dire potential consequences, stating:

 

“Contemporary AI systems are now becoming human-competitive at general tasks, and we must ask ourselves: Should we let machines flood our information channels with propaganda and untruth? Should we automate away all the jobs, including the fulfilling ones? Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us? Should we risk loss of control of our civilization? Such decisions must not be delegated to unelected tech leaders. Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable.”[1]

 

In recent announcements, Microsoft and Google have taken a more proactive approach to their products’ shortcomings. Google maintains that “Bard is experimental, and some of the responses may be inaccurate.” Microsoft recently launched several new AI features for its products with caveats on accuracy. A recent Microsoft demonstration of Security Copilot even highlighted an error in which the AI referred to Windows 9, which never existed.

 

Hopefully, this more cautious approach will increase user awareness of the limitations of these two models. But will the rest of the market take a similarly cautious approach with potential customers? Venture investors put $2.2 billion behind early-stage generative AI companies in 2022.[2] According to PitchBook, 706 companies have taken venture investment to build generative AI capabilities.

 

While it’s important to understand the front-end limitations of these models, it’s the potential problems we can’t see that pose the greatest risk. Like Microsoft’s experience with Tay, we can’t afford to be caught off-guard by people’s willingness to find and take advantage of the vulnerabilities in these systems.

 

In a tech community where cybersecurity, privacy, and safety are often treated as a ‘nice to have’ or ignored altogether, technology leaders can differentiate themselves by adopting a more thoughtful approach to these issues and incorporating key security, privacy, and safety practices into product design and development from the outset. Specific to AI, now is the time for generative AI companies and the cybersecurity ecosystem to address and solve the problems we can’t see. They include:

 

Data Biases

 

Biases in learning systems, whether systemic, human, or statistical/computational, can be introduced when an AI system is trained. They can stem from the type of data used for learning, from the exclusion of data, or from the design of flawed algorithms.

 

OpenAI trained ChatGPT on text drawn from a range of internet sources, including several book databases, web text, Wikipedia, and articles. This might lead one to speculate that its data biases reflect those found and perpetuated on the internet.

 

Even more opaque is the dataset behind Google Bard. Based on LaMDA (Language Model for Dialogue Applications), it was trained on a dataset called Infiniset, a blend of internet content optimized to enhance the model’s ability to engage in dialogue. Very little is known about that dataset.

 

Biases can be introduced in any number of ways, often unintentionally. Caroline Criado Perez, author of Invisible Women, points out that such biases often come from simply failing to disaggregate data by gender, race, and other factors.
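To make that concrete, here is a minimal sketch (using invented data and the pandas library; the group and column names are hypothetical) of how a single aggregate metric can hide exactly this kind of disparity:

# A minimal, hypothetical sketch: an overall accuracy figure can hide
# large differences between groups. The data here is invented for illustration.
import pandas as pd

# Toy evaluation results: model predictions vs. true labels, with a group column
results = pd.DataFrame({
    "group":      ["A", "A", "A", "A", "B", "B", "B", "B"],
    "true_label": [1,   0,   1,   0,   1,   0,   1,   0],
    "predicted":  [1,   0,   1,   0,   0,   1,   1,   0],
})
results["correct"] = results["true_label"] == results["predicted"]

print("Overall accuracy:", results["correct"].mean())   # 0.75 looks acceptable
print(results.groupby("group")["correct"].mean())       # group A: 1.0, group B: 0.5

The overall figure looks acceptable, while one group is served markedly worse; only disaggregating the data reveals it.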

 

Data Poisoning

 

More malicious is data poisoning: the intentional introduction of incorrect data in a deliberate attempt to alter a learning system so that it produces poor or incorrect results. In just the last few weeks, a team of computer scientists from ETH Zurich, Google, Nvidia, and Robust Intelligence demonstrated two data poisoning attacks that could be used to intentionally feed bad data into AI models at very low cost, under $60, and with little technical expertise.
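For intuition only, here is a minimal sketch of the simplest form of poisoning, label flipping, on synthetic data using scikit-learn. Everything here is invented for illustration; the attacks referenced above target web-scale training sets and are far more sophisticated.

# A minimal, hypothetical sketch of label-flipping data poisoning on
# synthetic data. It only illustrates the basic mechanism.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic, stand-in data; real poisoning targets live training pipelines
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Baseline model trained on clean labels
clean = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("Clean test accuracy:   ", clean.score(X_test, y_test))

# "Poison" 20% of the training labels by flipping them
rng = np.random.default_rng(0)
flipped = y_train.copy()
idx = rng.choice(len(flipped), size=len(flipped) // 5, replace=False)
flipped[idx] = 1 - flipped[idx]

# Same model, same features, corrupted labels
poisoned = LogisticRegression(max_iter=1000).fit(X_train, flipped)
print("Poisoned test accuracy:", poisoned.score(X_test, y_test))

How much the accuracy degrades depends on the data and the model, but the point stands: an attacker who can influence even a modest fraction of the training data can influence the behavior of the model that learns from it.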

 

The outcome of such attacks could escalate quickly from offensive to malicious and even life-threatening.

 

Data Privacy

 

We have only just begun to explore the risks around data privacy, including potential breaches of data sovereignty, data privacy, and ethics regulations, and the potential to compromise individual information. The European Union’s draft AI Regulation attempts to address this, but its data privacy protections are still a long way from implementation.

 

Cybersecurity Attacks and Ransomware

 

Finally, any publicly launched models will be tested by hackers exploiting cutting-edge offensive cyber capabilities to steal IP and data, deny users access to critical systems, and tamper with AI models and outcomes. These attacks can occur at the data lake, software, hardware, and even microprocessor level, and will require an ecosystem of solutions to protect against. And imagine the consequences of ransomware attacks that threaten to lock down the datasets, or even the entire systems, which power the next generation of AI-integrated applications.

 

These threats are heightened when companies overlook fundamental security, safety, and privacy issues and take insufficient steps to secure cyber assets and preempt or mitigate attacks. Moreover, the shortage of AI talent means there aren’t enough people, and not the right people, working to solve these problems. We know, for example, that there is a gap of at least 1.5 million people to fill current cybersecurity roles, with 68% of companies saying they lack the ability to attract great talent.[3]

 

At the risk of spreading FUD (fear, uncertainty, and doubt) without offering a solution, we can look both to the market for solutions and to individuals to advocate for and make the necessary changes. This is a global problem that requires individual effort to address.

 

No open letter will pause AI development to address these problems. But venture investors and startup founders can take ownership of the problem and commit to building resilient AI models and reducing the biases in their teams and their training data. Further, funding will be required to build a robust new ecosystem of companies providing next-generation cybersecurity solutions to secure AI systems and data.

 

A recent study found that 93% of businesses in the UK and US stated that AI is a future business priority. However, more than half of respondents say they don’t have the in-house talent to fulfill their project needs. Taking the safest route to development, one that minimizes the problems we can’t see, is the best path forward.

 

William Kilmer is an author, founder, and investor, and the head of venture investments for GALLOS (www.gallostech.io), a newly established company investing (pre-seed through Series B) in and co-building early-stage security technology companies.

 

 

References

[1] https://futureoflife.org/open-letter/pause-giant-ai-experiments/

[2] Emerging Tech Research Vertical Snapshot: Generative AI, PitchBook, 23 March 2023

[3] https://rsmus.com/insights/services/managed-services/whats-driving-the-middle-market-talent-gap.html
