As the use of generative AI increases, organizations are revisiting their internal policies and procedures to ensure responsible, legal, and ethical employee use of these novel tools. The Future of Privacy Forum consulted over 30 cross-sector practitioners and experts in law, technology, and policy to understand the most pressing issues and how they are accounting for generative AI tools in policy and training guidance. FPF’s Internal Policy Checklist is intended as a starting point for the development of organizational generative AI policies, highlighting four areas in which organizations should develop and/or assess internal policies. The full checklist includes additional detail and guidance.
Use in Compliance with Existing Laws and Policies for Data Protection and Security
Designated teams or individuals should revisit internal policies and procedures to ensure that they account for planned or permitted uses of generative AI. Employees must understand that relevant current and pending legal obligations continue to apply when they use these new tools.
Employee Training and Education
Identified personnel should inform employees of the implications and consequences of using generative AI tools in the workplace, including by providing training and resources on responsible use, risk, ethics, and bias. Designated leads should provide employees with regular reminders of legal, regulatory, and ethical obligations.
Employee Use Disclosure
Organizations should provide employees with clear guidance on when and whether to use organizational accounts for generative AI tools, as well as policies regarding permitted and prohibited uses of those tools in the workplace. Designated leads should communicate norms around documenting use and disclosing when generative AI tools are used.
Outputs of Generative AI
Systems should be implemented to remind employees to verify outputs of generative AI, checking for issues such as inaccuracy, outdated information, bias, or possible infringement of intellectual property rights. Organizations should determine whether and to what extent compensation should be provided to those whose intellectual property is implicated by generative AI outputs. When generative AI is used for coding, appropriate personnel should check and validate outputs for security vulnerabilities, as in the sketch below.
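To make that last point concrete, the following is a minimal, illustrative sketch (not part of the FPF checklist) of how an organization might run an automated security scan over AI-generated code before a human reviewer signs off. The "ai_generated/" directory name is a hypothetical convention, and the open-source Bandit static analyzer is used only as one example of a vulnerability-scanning tool.

```python
# Illustrative sketch: scan a directory of AI-generated Python code with the
# Bandit static analyzer before human review. Assumes `bandit` is installed
# (pip install bandit); the "ai_generated/" path is a hypothetical convention.
import subprocess
import sys


def scan_generated_code(path: str = "ai_generated/") -> bool:
    """Return True only if Bandit reports no medium- or high-severity findings."""
    result = subprocess.run(
        # -r: scan recursively; -ll: report only medium- and high-severity issues
        ["bandit", "-r", path, "-ll"],
        capture_output=True,
        text=True,
    )
    print(result.stdout)
    # Bandit exits with a non-zero status when findings remain after filtering,
    # so a zero return code means the scan passed.
    return result.returncode == 0


if __name__ == "__main__":
    # Fail the pre-merge check if the scan surfaces findings.
    sys.exit(0 if scan_generated_code() else 1)
```

Automated scanning of this kind supplements, rather than replaces, the human verification of outputs that the checklist calls for.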