Advancing AI Capabilities with Responsibility

What is AI?

AI, or Artificial Intelligence, is significantly transforming our work and daily lives. According to Coursera.org, AI refers to the creation of computer systems capable of performing tasks that historically required human intelligence, such as reasoning, making decisions, or solving problems. AI encompasses a range of technologies, including machine learning, deep learning, and natural language processing (NLP). These technologies enable machines to generate written content, steer a car, or analyze data. In short, AI allows computers to behave in more human-like ways: they process the information they are given or gather, then decide how to respond based on what they know or learn.

AI Applications

By harnessing the power of AI, experts can amplify their abilities, while leaving mundane and tedious (sometimes complicated) tasks to machines. Imagine a world where traffic lights change seamlessly, self-driving cars navigate the roads safely, customer queries are answered promptly, and job applicants are selected efficiently – all thanks to AI.

The practical applications of AI are vast. IBM Watson’s cutting-edge NLP system outperformed human experts on the game show Jeopardy!, while OpenAI’s ChatGPT has transformed text-based conversation. Google’s writing assistant, Bard, helps produce high-quality content, and AlphaFold accelerates drug discovery by predicting protein structures. Computer vision technology accurately identifies objects, streamlining processes, and self-driving cars are reshaping transportation. NLP also improves the efficiency of customer support centers, while image recognition solutions read documents for compliance and validity, visually inspect products for defects, and spot early signs of machine failure. AI’s potential applications expand daily.

AI Bias is Possible

However, we must approach AI with caution. Its capabilities are so powerful that misuse can have detrimental, long-term effects. AI can exhibit bias and discrimination based on the data it is trained on. Even when sensitive information such as age, gender, and ethnicity is excluded, other attributes in the data can act as proxies that correlate with those characteristics. Here are some potential scenarios to consider:

  1.  Job applications: Companies should be vigilant about sensitive attributes like age and ethnicity, even when the AI infers them indirectly from other attributes. Otherwise, certain groups of applicants can be unfairly excluded without recruiters being consciously aware of it.
  2.  Security and policing: Racial profiling, a form of bias, can occur if facial recognition systems are trained on data that over-represents specific groups of people, simply because more data is available for the areas where they live.
  3.  Healthcare: While AI and machine learning solutions can predict adverse conditions and coordinate care, organizations must ensure that these technologies do not reinforce existing biases or discriminate in providing care due to skewed or insufficient data. Otherwise, they risk failing to serve the areas or groups that need care the most.
  4.  Marketing: AI techniques used in marketing can effectively generate custom content tailored to specific segments or target customers. However, the same technology can also produce misinformation and conspiracy theories that manipulate people’s thinking, raising ethical concerns.

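The risk in scenarios like these can be made concrete with a quick statistical check. Below is a minimal, hypothetical sketch (the groups, decisions, and numbers are invented for illustration) of the “four-fifths rule” commonly used to screen hiring outcomes for disparate impact: if the selection rate of any group falls below 80% of the highest group’s rate, the process deserves a closer look.

```python
# Hedged sketch: a simple disparate-impact check (the "four-fifths rule")
# on hypothetical hiring decisions. Group labels and counts are illustrative.

def selection_rates(decisions):
    """decisions: list of (group, selected) pairs -> selection rate per group."""
    totals, selected = {}, {}
    for group, hired in decisions:
        totals[group] = totals.get(group, 0) + 1
        selected[group] = selected.get(group, 0) + (1 if hired else 0)
    return {g: selected[g] / totals[g] for g in totals}

def disparate_impact_ratio(rates):
    """Lowest selection rate divided by highest; a value below 0.8 flags possible bias."""
    return min(rates.values()) / max(rates.values())

# 100 applicants per group: group A hired at 40%, group B at 20%.
decisions = [("A", True)] * 40 + [("A", False)] * 60 + \
            [("B", True)] * 20 + [("B", False)] * 80

rates = selection_rates(decisions)
print(rates)                          # {'A': 0.4, 'B': 0.2}
print(disparate_impact_ratio(rates)) # 0.5 -> below the 0.8 threshold
```

A check like this does not prove discrimination, but it is a cheap first signal that proxy attributes may be steering outcomes along group lines.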
Responsible AI Framework

To mitigate these risks, industry leaders like IBM, DataPrime, and Google have implemented Responsible AI frameworks. Let’s examine their core pillars or guiding principles.

First, we’ll take a look at IBM’s AI implementation framework, which is guided by five core pillars:

  1.  Explainability: Building trust by making transparent decisions and providing clear explanations.
  2.  Fairness: Addressing biases and promoting inclusivity by assisting humans in making unbiased choices.
  3.  Robustness: Protecting AI systems against threats and ensuring their reliability.
  4.  Transparency: Sharing information about AI use with stakeholders of diverse roles.
  5.  Privacy: Prioritizing and safeguarding employee privacy rights throughout the AI lifecycle.
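To make the explainability pillar concrete, here is a minimal, hypothetical sketch (not IBM’s actual tooling; the model, weights, and applicant data are invented): a linear scoring model whose decision can be broken down into per-feature contributions that a human can inspect.

```python
# Hypothetical sketch of explainability: a linear scoring model whose
# decision is decomposed into per-feature contributions a human can review.

weights = {"income": 0.5, "debt": -0.25, "tenure": 0.25}  # illustrative weights

def score(applicant):
    """Overall score: the weighted sum of the applicant's features."""
    return sum(weights[f] * applicant[f] for f in weights)

def explain(applicant):
    """Each feature's contribution to the score, largest magnitude first."""
    contrib = {f: weights[f] * applicant[f] for f in weights}
    return sorted(contrib.items(), key=lambda kv: abs(kv[1]), reverse=True)

applicant = {"income": 5.0, "debt": 2.0, "tenure": 4.0}
print(score(applicant))    # 0.5*5 - 0.25*2 + 0.25*4 = 3.0
print(explain(applicant))  # income contributes most, then tenure, then debt
```

Real systems use richer techniques (feature attribution for nonlinear models, for example), but the principle is the same: a decision is explainable when each input’s influence on the outcome can be surfaced in human terms.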

Second, DataPrime emphasizes Accountability, Impartiality, Resilience, Transparency, Security, and Governance as vital aspects of responsible AI. Each element is described below, with more detail available in the original article:

  1.  Accountable: Algorithms, attributes and correlations are open to inspection.
  2.  Impartial: Internal and external checks enable equitable application across all participants.
  3.  Resilient: Monitored and reinforced learning protocols with humans produce consistent and reliable outputs.
  4.  Transparent: Users have a direct line of sight to how data, output and decisions are used and rendered.
  5.  Secure: AI is protected from potential risks (including cyber risks) that may cause physical and digital harm.
  6.  Governed: Organization and policies clearly determine who is responsible for data, output and decisions.

Finally, Google puts focus on the following principles:

  1.  Fairness addresses the disparate outcomes end users may experience from algorithmic decision-making as related to sensitive characteristics such as race, income, sexual orientation, or gender.
  2.  Accountability means being held responsible for the effects of an AI system. This involves three dimensions:
         • Transparency: sharing information about system behavior and organizational processes, which may include documenting how models and datasets were created, trained, and evaluated.
         • Interpretability: understanding an ML model’s decisions, so that humans can identify the features that lead to a prediction.
         • Explainability: the ability to explain a model’s automated decisions in a way humans can understand.
  3.  Safety includes a set of design and operational techniques to follow to avoid and contain actions that can cause harm, intentionally or unintentionally.
  4.  Privacy practices in Responsible AI involve the consideration of potential privacy implications in using sensitive data. This includes not only respecting legal and regulatory requirements, but also considering social norms and typical individual expectations.

Create Your Own Responsible AI Framework

Above are just examples that have been formally shared with the public. Each organization has the flexibility to define their guiding principles and shape their own responsible AI framework. However, what matters most is putting these principles into practice, despite the complexities of AI solutions.

Overall, AI holds immense potential for improving our lives. However, it must be used with ethics and responsibility in mind. By adopting frameworks and prioritizing guiding principles, we can harness the power of AI while safeguarding against its potential harm.

Is your organization thinking about Ethical AI? Does your organization have a Responsible AI Framework? What are your guiding principles?

Don’t forget to check out our blog about Microsoft Whiteboard.
