New Cisco Study Highlights the Impact of Data Security and Privacy Concerns on GenAI Adoption
The Cisco 2024 Data Privacy Benchmark Study has revealed that more than 1 in 4 organizations (27 percent) banned the use of GenAI over privacy and data security risks. While 2023 was a breakout year for GenAI, the Cisco study shows that a significant percentage of organizations have trust issues with GenAI.
The seventh edition of the Cisco benchmark study was compiled using data from over 2,600 privacy and security professionals from around the globe. More than 90 percent of the respondents believed that GenAI needs more advanced techniques to manage data and risk.
The top concerns among respondents included the threat to their company’s legal and intellectual property rights (69 percent) and the risk of sensitive information being disclosed to competitors or the public (68 percent).
Organizations that have not banned GenAI outright are still aware of the risks. Sixty-three percent have set limits on what data can be entered, and 61 percent restrict which GenAI tools employees can use.
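To illustrate what a data-entry limit might look like in practice, here is a minimal sketch of a prompt-scrubbing guardrail. It is a hypothetical, simplified illustration (the pattern names and `redact_prompt` function are assumptions, not anything described in the Cisco study), not a production-grade data loss prevention filter:

```python
import re

# Hypothetical patterns for data an organization might bar from being
# entered into external GenAI tools. Real deployments would use far
# more robust detection (DLP tooling, classifiers, allowlists).
BLOCKED_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact_prompt(prompt: str) -> str:
    """Replace blocked data with placeholders before the prompt
    is sent to an external GenAI service."""
    for label, pattern in BLOCKED_PATTERNS.items():
        prompt = pattern.sub(f"[{label.upper()} REDACTED]", prompt)
    return prompt

print(redact_prompt("Contact alice@example.com, SSN 123-45-6789"))
```

A guardrail like this sits between employees and the GenAI tool, so sensitive values never leave the organization's boundary even when the tool itself is permitted.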
Customers are also concerned about how AI uses their data. The Cisco study shows that 91 percent of organizations recognize these concerns and admit they need to do more to reassure customers. However, customer confidence sits at similar levels to those in Cisco’s report last year, suggesting little progress has been made.
“LLMs (GenAI) are evolving enterprise digital transformation efforts and introducing new data privacy precautions that need to be addressed early on. Despite the potential GenAI can have, there have already been complaints about the unintended data privacy exposure such as personal confidential information. Companies must implement data privacy by design to streamline knowledge management, gain expected efficiencies and safely benefit from GenAI innovation,” said Ravi Srinivasan, CEO of Votiro.
The importance of data security and privacy is evident in organizations’ increased privacy spending, which has more than doubled over the last five years. Ninety-five percent of respondents indicated that privacy investments are yielding returns, with the average organization seeing benefits of 1.6 times its spending.
“94% of respondents said their customers would not buy from them if they did not adequately protect data,” explains Harvey Jang, Cisco Vice President and Chief Privacy Officer. External certifications and laws can play a key role in reassuring customers that their data is safe, as they provide hard evidence that organizations can be trusted.
A recently released report on GenAI’s impact on the software delivery lifecycle by LinearB, a leader in software delivery management solutions, also shows that security is a chief concern, followed by compliance and quality.
These concerns drop sharply across the board as GenAI adoption grows: organizations that are hesitant to deploy GenAI are more likely to distrust the technology, but once they reach a higher adoption phase, some of those concerns are alleviated.
Last year, a Gartner survey showed similar trends in GenAI risk perception. Fifty-seven percent of respondents said they were concerned about leaked secrets in AI-generated code, and 58 percent had concerns about incorrect or biased outputs.
While 93 percent of IT and security leaders said they are involved in their organization’s GenAI security efforts, only 24 percent said they own this responsibility.
The report also highlighted some of the tools used by organizations to address risks related to GenAI. The most popular tools included AI application security, ModelOps, and privacy-enhancing technologies (PETs).
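Privacy-enhancing technologies cover a range of approaches; one common class is pseudonymization, where real identifiers are replaced with stable tokens before data reaches an AI pipeline. The sketch below is a hypothetical illustration of that idea (the `pseudonymize` function and its salt handling are assumptions for this example, not a technique named in the report):

```python
import hashlib

def pseudonymize(identifier: str, secret_salt: str) -> str:
    """Derive a stable token from an identifier using a salted hash.
    The salt must stay internal so tokens cannot be reversed by
    simply hashing a list of candidate identifiers."""
    digest = hashlib.sha256((secret_salt + identifier).encode()).hexdigest()
    return f"user_{digest[:12]}"

# The same identifier always maps to the same token, so analytics and
# model training can still join records without seeing the real value.
print(pseudonymize("alice@example.com", "org-internal-salt"))
```

Because the mapping is deterministic, downstream GenAI workloads can correlate records belonging to one customer while the identifying value itself never leaves the organization.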
Several more studies have shown that despite the surge in GenAI demand, organizations remain cautious about GenAI deployment. The looming threat of stricter compliance is not helping. However, as organizations move through the adoption phase, their confidence should grow.