Source: securityboulevard.com – Author: Sonya Duffin
If you use generative artificial intelligence (GenAI) tools at work, be warned: More than half (53%) of your colleagues think you’re cheating, and nearly one in four (23%) would dock your pay for it.
Those are just a few of the findings of a recent global survey of 11,500 office workers on the use of GenAI and attitudes toward the technology everyone is talking about. The survey, conducted by market research firm 3Gen on behalf of Veritas, finds more than 70% of respondents admit to using GenAI tools on the job. One in three admit to sharing customer information, employee details and financial data with the platforms – risky practices that could violate data privacy regulations.
The report also reveals a concerning lack of policies and guidelines regarding the workplace use of GenAI tools such as OpenAI’s ChatGPT, Microsoft Copilot and Anthropic’s Claude. This absence of clear direction creates divisions among employees, increases the risk of exposing sensitive company data and causes many employees to miss out on productivity gains.
Employees crave rules around GenAI from their employers: half (51%) want either mandatory policies or voluntary guidelines, and another one in four (26%) want their employer to provide training, with most (68%) saying they need to know how to use GenAI “in the right way.” Despite this, only about a third (36%) said their employer currently has policies governing when and how GenAI can be used.
The Risks of Unfettered GenAI Access
And the risks are real. More than 30% of office workers confess to having entered potentially sensitive data, such as customer records, human resources information and company financials, into a GenAI tool. Yet a striking 61% fail to recognize that this could publicly expose private information, and slightly more (63%) admit that they don’t understand the compliance implications.
Improper use of GenAI systems can pose significant risks and cause potential damage to businesses and employees alike, including reputational damage, intellectual property infringement, data privacy breaches, ethical concerns and compliance issues. For employees, misuse of GenAI can lead to disciplinary action, legal liability, personal reputational damage and mental health concerns.
On the flip side, 28% of staff do not use AI tools at all. More than half (53%) of this group feel at a professional disadvantage for not leveraging the technology. And with good reason: GenAI users report faster access to information (48%) and increased productivity (40%), while others cite benefits including task automation (39%) and idea generation (34%).
Training is Essential for Safe Use
To mitigate these risks, businesses must implement robust governance frameworks, ethical guidelines, comprehensive training programs and continual monitoring for the responsible use of GenAI technologies. Employees should be aware of the potential consequences of misuse and the importance of adhering to established policies and ethical principles.
Establishing a clear governance structure is crucial. Organizations should form dedicated teams or committees to oversee the development, deployment and monitoring of GenAI systems. These teams should draw on expertise from technology, legal and ethics functions, along with domain-specific subject matter experts, to ensure a well-rounded approach to managing risks.
Ethical guidelines should be developed in alignment with the organization’s values and principles. These guidelines should address issues such as data privacy, intellectual property rights, content moderation and the responsible development and use of GenAI technologies. Employees should receive comprehensive training on these guidelines, and their adherence should be regularly assessed and reinforced.
Training programs are essential for raising awareness and equipping employees with the knowledge and skills required for the responsible use of GenAI. These programs should cover data handling, model governance, bias mitigation and ethical considerations. Ongoing training and refresher courses should be provided to ensure that employees remain up to date with the latest developments and best practices.
Furthermore, businesses should implement robust monitoring and auditing mechanisms to detect and address potential misuse or unintended consequences of GenAI systems. Regular risk assessments and impact evaluations should be conducted to identify and mitigate emerging risks proactively.
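To make the monitoring idea concrete, here is a minimal sketch of how a prompt could be screened and logged before it reaches an external GenAI service. The `screen_prompt` helper, the regex patterns and the logger name are illustrative assumptions, not anything prescribed by the Veritas report; a real deployment would rely on a dedicated DLP or PII-detection service and a tamper-evident audit store rather than a few regexes.

```python
import re
import logging
from datetime import datetime, timezone

# Illustrative patterns only -- a production system would use a proper
# DLP/PII-detection service rather than a handful of regexes.
SENSITIVE_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

audit_log = logging.getLogger("genai_audit")
logging.basicConfig(level=logging.INFO)


def screen_prompt(user: str, prompt: str) -> str:
    """Redact likely sensitive data from a prompt and record an audit entry."""
    findings = []
    redacted = prompt
    for label, pattern in SENSITIVE_PATTERNS.items():
        if pattern.search(redacted):
            findings.append(label)
            redacted = pattern.sub(f"[REDACTED {label.upper()}]", redacted)

    # Audit trail of who submitted what kind of sensitive content, and when.
    audit_log.info(
        "user=%s time=%s findings=%s",
        user,
        datetime.now(timezone.utc).isoformat(),
        findings or "none",
    )
    return redacted


if __name__ == "__main__":
    # The email address is replaced before the prompt would be forwarded
    # to any external GenAI service.
    print(screen_prompt("jdoe", "Summarize the complaint from alice@example.com"))
```

Even a simple gate like this addresses both halves of the problem the survey highlights: it reduces the chance that customer or financial data leaves the organization, and it produces the audit records that risk assessments and impact evaluations depend on.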
GenAI tools are here to stay, and they offer great opportunities to help employees be more efficient in their daily work. But while employees are embracing the technology, businesses must embrace it too if they are to make the most of what it can offer.
Our message is clear: Thoughtfully developed and well-communicated guidelines on the appropriate use of generative AI, combined with the right data compliance and governance toolset, are essential for businesses looking to stay ahead. Your employees will thank you, and your organization can enjoy the benefits without increasing risk.
Original Post URL: https://securityboulevard.com/2024/05/risks-of-genai-rising-as-employees-remain-divided-about-its-use-in-the-workplace/
Category & Tags: AI and Machine Learning in Security,AI and ML in Security,Cybersecurity,Data Privacy,Data Security,Deep Fake and Other Social Engineering Tactics,Identity & Access,Network Security,Security Awareness,Security Boulevard (Original),Social – Facebook,Social – X,Social Engineering,Threat Intelligence,Threats & Breaches,ai ethics,Artificial Intelligence,data resilience,GenAI,governance,training – AI and Machine Learning in Security,AI and ML in Security,Cybersecurity,Data Privacy,Data Security,Deep Fake and Other Social Engineering Tactics,Identity & Access,Network Security,Security Awareness,Security Boulevard (Original),Social – Facebook,Social – X,Social Engineering,Threat Intelligence,Threats & Breaches,ai ethics,Artificial Intelligence,data resilience,GenAI,governance,training