CEO of WitnessAI, draws on his experience at Palo Alto Networks to address the challenges of securing AI models in enterprise environments.
With generative AI becoming a critical tool for boosting productivity, the demand for robust controls to manage its risks is surging. Despite the enthusiasm, a Riskonnect survey reveals that only 9% of companies feel prepared to handle threats related to privacy and intellectual property from generative AI usage.
Privacy and governance issues were among the top barriers cited. Common AI governance issues include:
- Lack of visibility – With new AI chatbots and projects appearing on the web each day, IT organizations are often unable to track which AI systems their employees are accessing, and what employees are doing with those systems. An internal WitnessAI survey indicated that nearly 90% of CISOs struggle to get a clear picture of employee AI usage.
- Lack of control – AI presents a new set of privacy and compliance challenges, such as preventing LLM training data from one client from being used to serve a different client, ensuring that employees can’t illegally access customer data within a private LLM, or blocking company IP from being shared with a public LLM such as ChatGPT.
- Lack of protection – LLMs create a new attack surface, putting enterprises at risk of data or financial loss. Prompt injection, jailbreaking and hallucination are a few of the common security risks from LLMs in use today.
The Balance of Power and Control in AI Security
Caccia likens securing powerful AI models to driving a sports car: having advanced capabilities is futile without effective controls, such as good brakes and steering. This analogy underscores the importance of not just the AI’s power but also the controls that ensure its safe application, especially in the enterprise sector.
WitnessAI’s platform offers a crucial service by acting as an intermediary between employees and their company’s AI models—not just those behind APIs like OpenAI’s GPT-4 but also more open systems like Meta’s Llama 3. It enforces policies that mitigate risks, including preventing unauthorized access to sensitive data and ensuring that AI tools are used appropriately.
Tackling Generative AI Risks with Innovative Solutions
The platform offers modules designed to address various aspects of AI risk. These include preventing employees from misusing AI tools—for instance, querying about sensitive pre-release financial reports or exposing internal codebases. Another module redacts proprietary information from prompts sent to AI models and employs techniques to guard against manipulative attacks that could lead models to produce unintended outputs.
“We aim to define the problem of safe AI adoption in a way that resonates with enterprises, and then provide a solution that tackles these specific challenges,” Caccia explains. His approach not only aims to protect businesses but also aligns with the needs of chief information security officers (CISOs) and chief privacy officers (CPOs) who are tasked with adhering to existing and forthcoming regulations.
Despite the benefits, WitnessAI’s approach involves intercepting all data before it reaches an AI model, which raises privacy concerns. The company addresses these worries by isolating and encrypting customer data within separate instances of their platform, ensuring that each customer’s AI activity data remains private and inaccessible to others.
However, the platform’s capability to monitor employee interactions with AI models poses potential issues related to workplace surveillance. While Caccia and his team assert the security and privacy benefits of their system, the notion of monitoring can affect employee morale and trust within a company.
Future Prospects and Industry Impact
WitnessAI’s early success in attracting interest from corporate users and venture capital—including a significant $27.5 million funding round led by Ballistic Ventures and GV—signals strong market potential. The company plans to expand its team significantly by the end of the year and continues to develop its offerings to meet the evolving needs of model compliance and governance in the AI space.
As generative AI continues to integrate deeper into business processes, the solutions provided by companies like WitnessAI will be crucial in navigating the complexities of AI implementation while ensuring compliance, security, and privacy within the enterprise.