
What Entrepreneurs Should Know About AI and Cybersecurity

By Alex Lanstein, CTO, StrikeReady

There’s no doubt that artificial intelligence (AI) has made it easier and faster to do business. The speed AI brings to product development is significant, and its importance cannot be overstated, whether you’re designing the prototype of a new product or the website to sell it on.

Similarly, large language models (LLMs) like OpenAI’s ChatGPT and Google’s Gemini have revolutionized the way people do business, making it possible to create or analyze large amounts of text in seconds. But because LLMs are the shiny new toy, the professionals using them may not recognize the downsides that make their information less secure. This makes AI a mixed bag of risk and opportunity that every business owner should consider.

Access Issues

Every business owner understands the importance of data protection, and an organization’s security team will put controls in place to ensure employees can’t access information they’re not authorized to see. But despite being well aware of these permission structures, many people don’t apply the same principles to their use of LLMs.

Generally, people who use AI tools don’t know exactly where the information they feed into them ends up. Even cybersecurity experts, who know better than anyone the risks of loose data controls, can be guilty of this. They often feed security alert data or incident response reports into systems like ChatGPT willy-nilly, without thinking about what happens to that information after they’ve received the summary or analysis they wanted.

The fact is, however, that there are people actively looking at the information you submit to publicly hosted models. Whether they work in an anti-abuse department or are refining the models themselves, your information is subject to human eyeballs, and people in any number of countries may be able to see your business-critical documents. Even giving feedback on a prompt response can trigger your information being used in ways you didn’t anticipate or intend: the simple act of a thumbs up or thumbs down can lead to someone you don’t know accessing your data, and there’s nothing you can do about it. In short, assume that the confidential business data you feed into an LLM is being reviewed by unknown people who may be copying and pasting all of it.

The Dangers of Uncited Information

Despite the tremendous amount of information fed into AI daily, the technology still has a trustworthiness problem. LLMs tend to hallucinate, making up information from whole cloth, when responding to prompts. That makes it a dicey proposition to rely on the technology for research. A recent, highly publicized cautionary tale: the personal injury law firm Morgan & Morgan cited eight fictitious cases, the product of AI hallucinations, in a lawsuit. Consequently, a federal judge in Wyoming threatened sanctions against the two attorneys who got too comfortable relying on LLM output for legal research.

Similarly, even when AI isn’t making up information, it may be providing information that isn’t properly attributed, creating copyright conundrums. Anyone’s copyrighted material may be used by others without their knowledge, let alone their permission, which puts every LLM enthusiast at risk of unwittingly infringing a copyright, or of having their own copyright infringed. For example, Thomson Reuters won a copyright lawsuit against Ross Intelligence, a legal AI startup, over its use of content from Westlaw.

The bottom line: you want to know where your content is going, and where it’s coming from. If an organization relies on AI for content and there’s a costly error, it may be impossible to tell whether the mistake came from an LLM hallucination or from the human being who used the technology.

Lower Barriers to Entry

Despite the challenges AI may create for businesses, the technology has also created a great deal of opportunity. There are no real veterans in this space, so someone fresh out of college is not at a disadvantage compared to anyone else. Other technologies often demand specialized skills that raise the barrier to entry; generative AI poses no such hindrance to its use.

As a result, you may be able to bring promising junior employees into business activities more easily. Since all employees start on a comparable footing with AI, everyone in an organization can leverage the technology for their respective jobs. That adds to the promise of AI and LLMs for entrepreneurs. There are clear challenges businesses need to navigate, but the benefits of the technology far outweigh the risks, and understanding the possible shortfalls can help you take advantage of AI so you don’t get left behind the competition.

About the Author:

Alex Lanstein is CTO of StrikeReady, an AI-powered security command center solution. Alex is an author, researcher, and expert in cybersecurity, and has successfully fought some of the world’s most pernicious botnets: Rustock, Srizbi, and Mega-D.
