As the use of generative AI tools becomes commonplace, Avivah Litan, Distinguished VP Analyst at Gartner, looks at the key steps that organisations can take to help protect their sensitive data.
In the dynamic landscape of artificial intelligence (AI), the surge in interest surrounding generative AI tools like ChatGPT and innovations like Microsoft 365 Copilot is palpable. While these technologies promise transformative potential, they also raise important concerns about safeguarding sensitive data.
CIOs and IT leaders should take the following steps to mitigate the sensitive data risks associated with ChatGPT.
1. Ensuring Data Integrity with Advanced Security Measures
The adoption of generative AI introduces the challenge of preventing sensitive data from inadvertently entering AI systems. To combat this, leverage existing security tools such as a security service edge (SSE) solution, using the SSE's capability to mask, redact, or block sensitive data inputs and so ensure data integrity at the point of interaction.
In this context, the block option is particularly effective, preventing sensitive data from entering generative AI services through web interfaces and APIs alike. This proactive approach is crucial for maintaining the confidentiality of sensitive information and for enforcing a consistent policy across the organisation, as the sketch below illustrates.
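To make the pattern concrete, here is a minimal Python sketch of the mask, redact and block logic described above. It illustrates the concept only and is not an SSE product integration: the regular expressions, policy names and apply_policy function are hypothetical stand-ins for the classifiers and policies a real SSE vendor supplies.

```python
# Minimal sketch of the mask/redact/block pattern an SSE enforces at the
# prompt boundary. Patterns and policies here are hypothetical examples;
# a real deployment relies on the SSE vendor's detection and policy engine.
import re

SENSITIVE_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "card_number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def apply_policy(prompt: str, policy: str = "mask") -> str:
    """Mask, redact, or block sensitive data before it reaches an AI service."""
    for label, pattern in SENSITIVE_PATTERNS.items():
        if not pattern.search(prompt):
            continue
        if policy == "block":
            # Blocking stops the request outright, for both web and API paths.
            raise ValueError(f"Prompt blocked: contains {label}")
        replacement = "****" if policy == "mask" else f"[{label.upper()} REDACTED]"
        prompt = pattern.sub(replacement, prompt)
    return prompt

print(apply_policy("Email jane.doe@example.com about the refund", policy="redact"))
```

In practice, the same policy would be applied at every egress point the SSE controls, so web and API traffic are treated consistently across the organisation.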
2. Analysing the Data Security Protocols Underpinning Off-the-Shelf Tools
The advent of commercial off-the-shelf (COTS) generative AI solutions (such as Microsoft 365 Copilot) presents an enticing prospect for content creation. Nevertheless, it’s essential to meticulously analyse the data security protocols underpinning these tools.
Organisations planning their adoption strategy for these tools should consider the type of data involved. COTS generative AI services can be embraced in public data scenarios to boost innovation and productivity. When dealing with proprietary or customer data, a careful assessment of data security, compliance, and privacy measures is essential to prevent inadvertent compromises.
For highly sensitive data, where stringent privacy is paramount, integrating these tools requires a fortified approach within existing data governance and access control frameworks, ensuring compliance with rigorous privacy standards. Tailoring the adoption strategy to the data's nature enables organisations to harness the power of generative AI while safeguarding data integrity and adhering to regulatory obligations.
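As an illustration of how such a tiered strategy might be encoded in tooling, the sketch below maps hypothetical data classifications to permitted generative AI usage. The tiers, rules and messages are examples only; real classifications would come from the organisation's own data governance framework.

```python
# Hypothetical policy table mapping data classification to permitted
# generative AI usage; tiers and rules are illustrative examples only.
from enum import Enum

class DataClass(Enum):
    PUBLIC = "public"
    PROPRIETARY = "proprietary"          # includes customer data
    HIGHLY_SENSITIVE = "highly_sensitive"

POLICY = {
    DataClass.PUBLIC:           {"cots_allowed": True,  "assessment_required": False},
    DataClass.PROPRIETARY:      {"cots_allowed": True,  "assessment_required": True},
    DataClass.HIGHLY_SENSITIVE: {"cots_allowed": False, "assessment_required": True},
}

def adoption_guidance(classification: DataClass) -> str:
    """Return the adoption rule for a given data classification."""
    rule = POLICY[classification]
    if not rule["cots_allowed"]:
        return "COTS tools not permitted: use a private deployment within existing governance and access controls."
    if rule["assessment_required"]:
        return "COTS tools permitted only after a data security, compliance and privacy assessment."
    return "COTS tools permitted."

print(adoption_guidance(DataClass.PROPRIETARY))
```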
3. Custom Solutions for Optimal Data Protection
For organisations seeking maximum control over data protection, creating tailored generative AI applications using foundational models emerges as a strategic choice. Microsoft's Azure OpenAI Service is a pivotal platform for developing GPT-based applications, especially those dealing with proprietary data.
This approach empowers organisations to engineer applications that align precisely with their unique data security requisites. While responsibility for application security falls on the customer, Azure OpenAI Service offers a versatile canvas for innovation.
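For illustration, here is a minimal sketch of calling a GPT deployment on Azure OpenAI Service using the openai Python SDK. The endpoint, deployment name and environment variables are placeholders, and the surrounding security controls (network isolation, key management, logging) remain the customer's responsibility, as noted above.

```python
# Minimal sketch of calling a GPT deployment on Azure OpenAI Service.
# Endpoint, deployment name, and environment variables are placeholders.
import os
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],  # e.g. https://<resource>.openai.azure.com
    api_key=os.environ["AZURE_OPENAI_API_KEY"],          # keep credentials out of source code
    api_version="2024-02-01",
)

response = client.chat.completions.create(
    model="my-gpt4-deployment",  # hypothetical deployment name configured in Azure
    messages=[
        {"role": "system", "content": "Answer using only the supplied context."},
        {"role": "user", "content": "Summarise the attached quarterly report."},
    ],
)
print(response.choices[0].message.content)
```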
Organisations with deep learning proficiency, substantial computational resources, and dedicated budgets can contemplate training domain-specific large language models (LLMs) using proprietary data. This approach, exemplified by BloombergGPT, yields unparalleled control over sensitive data protection. By training LLMs from scratch, organisations can design AI models that adhere closely to their data security parameters, constructing robust defences against data leakage.
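To give a sense of what training from scratch involves, the skeletal sketch below initialises a small GPT-style model with random weights and pretrains it on an in-house corpus using the Hugging Face transformers and datasets libraries. The model size, corpus path and training arguments are illustrative only; an effort at BloombergGPT scale requires vastly larger models, data and infrastructure.

```python
# Skeletal sketch of pretraining a domain LLM from scratch on proprietary
# text. Model size, file path, and hyperparameters are illustrative only.
from transformers import (GPT2Config, GPT2LMHeadModel, GPT2TokenizerFast,
                          Trainer, TrainingArguments,
                          DataCollatorForLanguageModeling)
from datasets import load_dataset

# Reuse only the public GPT-2 vocabulary; model weights are randomly
# initialised below, so no pretrained weights or external data are involved.
tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token

config = GPT2Config(n_layer=12, n_head=12, n_embd=768)  # small model for illustration
model = GPT2LMHeadModel(config)                          # random initialisation: trained from scratch

dataset = load_dataset("text", data_files={"train": "proprietary_corpus.txt"})  # hypothetical path

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = dataset["train"].map(tokenize, batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="domain-llm",
                           per_device_train_batch_size=4,
                           num_train_epochs=1),
    train_dataset=tokenized,
    # mlm=False gives causal language modelling: labels are the input tokens.
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```

Because every artefact in this pipeline, from corpus to checkpoints, stays inside the organisation's own infrastructure, data leakage risk is confined to systems the organisation already governs.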
As generative AI ascends to the forefront of technological progress, it brings both promise and peril. With a carefully crafted roadmap, security professionals can navigate the landscape of ChatGPT and generative AI while safeguarding sensitive data. In this balance between innovation and protection, organisations can realise AI's potential without compromising their data's integrity.
About the author
Avivah Litan is a Distinguished VP Analyst in Gartner Research. She is currently a member of the ITL AI team that covers AI and Blockchain, and a lead Gartner analyst specialising in AI trust, risk and security management.
Gartner analysts will explore how leaders must structure their AI operating models at the Gartner Security & Risk Management Summit, taking place from 26 – 28 September 2023 in London.