Principles for Responsible Use of AI Everyone Should Know

The competitive nature of AI development poses a dilemma for organizations, as prioritizing speed can lead to ignoring ethics, bias detection, and safety measures.

 


Known and emerging concerns around AI in the workplace include the spread of misinformation, copyright and intellectual property issues, cybersecurity, data privacy, and navigating fast-paced and ambiguous regulations. To mitigate these risks, you may want to consider the following principles for responsible use of AI in the workplace.

The Importance of Responsible Use of AI

Love it or hate it, the rapid expansion of AI isn't going to slow down anytime soon. But AI mistakes can quickly tarnish a brand's reputation; Microsoft's first chatbot, Tay, is a prime example. In the tech race, leaders fear falling behind their competitors. It's a high-stakes situation where hesitation can easily put a company at a clear disadvantage.

 

Leaders who prioritize speed to market are fueling the current AI 'arms race,' in which large companies rush to produce products and potentially overlook important considerations like ethical guidelines, bias detection, and safety measures. For example, some large tech companies are cutting back on their responsible AI teams at precisely the time when responsible action is most needed.

It's also important to realize that the AI arms race extends beyond the large language model (LLM) developers like OpenAI, Google, and Meta. It includes the many companies that use LLMs to power their own custom applications. In the professional services world, for example, PwC announced that it is deploying AI chatbots to its 4,000 lawyers spread across 100 countries. These AI-powered assistants will 'help lawyers with contract analysis, regulatory compliance work, due diligence, and other legal advisory and consulting services.' PwC management is also considering expanding these AI chatbots into its tax practice. In total, the consulting giant plans to invest $1 billion in 'generative AI,' a powerful new tool that could deliver a disruptive boost to performance.

Risks of Using AI in the Workplace

Unfortunately, staying competitive also poses significant risks for both workers and employers. For example, a 2022 UNESCO publication on 'the impact of AI on women's working lives' reports that AI used in the hiring process is excluding women from promotions. A study cited in the report, which included 21 experiments with more than 60,000 targeted job ads, found that 'setting a user's gender to 'Female' resulted in fewer instances of ads related to higher-paying jobs than when users selected 'Male' as their gender.' And while AI bias in recruiting is clear, it's not going away anytime soon. As the UNESCO report notes, 'It's often a matter of data bias that will continue to infect AI tools and threaten key workforce elements like diversity, equity, and inclusion.'

 


Principles for Responsible AI in the Workplace

To help decision makers avoid negative outcomes while remaining competitive in the age of AI, here are some principles for a sustainable AI-powered workforce. These principles combine ethical frameworks from organizations such as the National Science Foundation with legal requirements related to employee surveillance and data privacy, such as the Electronic Communications Privacy Act and the California Privacy Rights Act. Steps to ensure responsible AI in the workplace include:


  1. Informed consent. Obtain voluntary and informed consent from employees to participate in any AI-assisted intervention after they have been provided with all relevant information about the initiative. This includes the purpose, process, and potential risks and benefits.
  2. Benefits aligned. Clearly state and align the objectives, risks, and benefits for both employer and employee.
  3. Easy opt-in and opt-out. Employees should be able to opt-in to AI-powered programs without feeling pressured or coerced, and they should be able to easily opt-out at any time without any negative consequences and without explanation.
  4. Conversational transparency. When using an AI-based conversational agent, the agent must explicitly disclose any persuasive goals the system aims to achieve through its dialogue with the employee.
  5. Explainable and unbiased AI. Clearly outline steps to eliminate, minimize, and mitigate bias in AI-powered human interventions, especially for marginalized and vulnerable groups, and provide transparent explanations of how AI systems reach their decisions and actions.
  6. AI training and development. Provide ongoing training and development for employees to ensure safe and responsible use of AI-enabled tools.
  7. Health and wellbeing. Identify the types of stress, discomfort, or harm that AI may cause and outline steps to mitigate the risks (e.g., how employers will mitigate stress caused by continuous AI monitoring of employee behavior).
  8. Data collection. Identify what data will be collected, whether the data collection involves any intrusive or uncomfortable procedures (e.g., using a webcam while working from home), and what steps will be taken to mitigate the risks.
  9. Data sharing. Disclose any intentions to share personal data, with whom, and why.
  10. Privacy and security. Outline protocols for maintaining privacy, securely storing employee data, and steps that will be taken in the event of a privacy breach.
  11. Third-party disclosures. Disclose all third parties used to provide and maintain the AI asset, what each third party's role is, and how it will ensure employee privacy.
  12. Communication. Notify employees of changes to data collection, data management, or data sharing, as well as any changes to AI assets or third-party relationships.
  13. Laws and regulations. Demonstrate ongoing commitment to comply with all laws and regulations related to employee data and the use of AI.