Insight | 07.25.24
Artificial Intelligence:
Fear and Promise in the Age of Change
Magic of 3
A question often heard these days is, “What is your deepest fear?” The line was made famous by the movie Coach Carter. Artificial intelligence has created fears on many levels, and three are emerging as our “deepest fears.” Since there is something magical about the number three, we will focus on those three fears and how each can become an opportunity.
Opportunity 1: Bias = Discrimination
Unsupervised artificial intelligence models are built from learning data, outcomes, and constraints, and each of these presents a bias risk as it is defined. Learning data can be biased by intent, by omission, or by improper analysis. Even with diligence in all of these areas, learned algorithms can reconstruct bias or prejudice. The risk is compounded because unsupervised models are “black boxes”: how the learned model is derived is not visible.
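As a rough illustration of how this kind of bias can be surfaced, the sketch below trains a simple classifier on synthetic data and compares outcomes across two groups. The data, feature names, and four-fifths threshold are invented for illustration; this is a minimal check, not a description of any real system mentioned above.

```python
# Hypothetical sketch: measuring group disparity in a trained model's outputs.
# All data is synthetic; the features and the "four-fifths" comparison are
# illustrative assumptions, not drawn from any real deployment.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5_000
group = rng.integers(0, 2, n)                    # 0 / 1 = two demographic groups
income = rng.normal(50 + 10 * group, 15, n)      # historical skew leaks into a feature
approved = (income + rng.normal(0, 10, n)) > 55  # past decisions reflect that skew

X = np.column_stack([income, group])
model = LogisticRegression().fit(X, approved)

preds = model.predict(X)
rate_0 = preds[group == 0].mean()
rate_1 = preds[group == 1].mean()
print(f"approval rate, group 0: {rate_0:.2f}")
print(f"approval rate, group 1: {rate_1:.2f}")
print("disparate impact ratio:", round(min(rate_0, rate_1) / max(rate_0, rate_1), 2))
```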
Examples of artificial intelligence models viewed as having some bias:
- Microsoft’s TayandYou chatbot, trained with hard-coded topics to avoid, had to be shut down because of politically incorrect phrasing. The model learned this behavior in only 16 hours of operation and roughly 96,000 tweets.
- Google, drawing on personal information, browsing history, and internet activity, was found more likely to display ads for high-paying jobs to men than to women.
- A University of California, Berkeley team isolated bias in an artificial intelligence model used to allocate care to roughly 200 million patients in the United States. The result was a lower standard of care for Black patients.
Opportunity 2: Explainability
A model trained to differentiate between wolves and dogs demonstrated a high degree of accuracy in testing, but a significantly higher error rate once put into operation. Investigation determined that the training data showed wolves against snowy backgrounds and dogs against spring- and summer-like backgrounds. A wolf photographed against a summer-like background was identified as a dog. Based on its training environment, the model had built its algorithm around the background, not the animal.
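The sketch below is a toy, assumption-laden recreation of that failure mode: a “background” feature is made almost perfectly predictive in the training data but not in the deployment data, and a simple classifier latches onto it.

```python
# Hypothetical sketch of the wolf/dog failure mode: a confounded "background"
# feature dominates training, then the shortcut breaks when backgrounds change.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 2_000
is_wolf = rng.integers(0, 2, n)

# Training data: wolves almost always appear against snow (background = 1).
snow_train = np.where(rng.random(n) < 0.95, is_wolf, 1 - is_wolf)
animal_feature = is_wolf + rng.normal(0, 1.0, n)   # weak, noisy "real" signal
X_train = np.column_stack([animal_feature, snow_train])

model = LogisticRegression().fit(X_train, is_wolf)

# Deployment data: backgrounds are random, so the shortcut no longer holds.
snow_test = rng.integers(0, 2, n)
X_test = np.column_stack([animal_feature, snow_test])

print("training accuracy:", round(model.score(X_train, is_wolf), 2))
print("deployment accuracy:", round(model.score(X_test, is_wolf), 2))
print("learned weights [animal, background]:", model.coef_.round(2))
```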
It is important up front to clearly understand the predictors and the objectives they serve. That understanding should be grounded in what actions will be taken based on model outcomes, expressed in terms the humans using the model can grasp. These insights increase transparency, transforming mistrust and distrust into trust.
This is the crux of explainability. Model definitions should be clear and concise. The source data used for the learning environment should be explained and vetted. The relationships between data inputs and outputs should be outlined. Finally, clear procedures should be identified, along with how they will be implemented, for verifying and maintaining model operations. Clear and concise model definitions build trust, starting with what the model is intended to accomplish. Even more trust is built by identifying the unusual circumstances in which control should be passed back to a human, or the situations the model cannot handle. This last point could have addressed the critical moment in 2001: A Space Odyssey, when HAL denies making errors and refuses to cede control.
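One concrete way to make a model’s behavior inspectable is to measure how much each input actually drives its predictions. The sketch below uses scikit-learn’s permutation importance on synthetic data as a stand-in for that kind of check; the feature names and data are invented, and permutation importance is one of several standard explainability techniques rather than the specific approach described above.

```python
# Hypothetical sketch: checking which inputs a trained model actually relies on.
# Feature names and data are invented for illustration.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(2)
n = 3_000
features = ["animal_shape", "background_snow", "image_brightness"]
X = rng.normal(size=(n, 3))
y = (X[:, 0] + 0.2 * X[:, 2] + rng.normal(0, 0.5, n)) > 0  # shape drives the label

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
result = permutation_importance(model, X, y, n_repeats=5, random_state=0)

for name, importance in zip(features, result.importances_mean):
    print(f"{name:>18}: {importance:.3f}")
```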
Opportunity 3: Knowledge Transfer
Knowledge transfer should be viewed as a two-way street. Supervised learning guides model development with labelled data that has known relationships between inputs and outputs. The higher-order learning of unsupervised models works through embeddings, considering thousands of inputs and potential relationships that human experts, in total, cannot comprehend. Those abstractions, which power unsupervised models, are the very things people fear: there is no transfer of knowledge back to humans, no explanation of how the model works.
The risk increases as models share data, create data, and interact with other models. Risk management processes need to be in place because models can find relationships beyond the data, producing tenuous correlations, and can learn behaviors that were not properly constrained during development. This lack of transparency and knowledge transfer means artificial intelligence models must be kept under surveillance for explainability and for how they operate with humans and other models, ultimately answering the question, for each model: at what point should humans intercede, and when is stopping the model the best course of action?
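A simple, common pattern for the “when should humans intercede” question is to have the model abstain and hand control back whenever its confidence falls below a threshold. The sketch below illustrates that idea; the classifier, data, and the 0.85 threshold are all invented assumptions, not a prescription.

```python
# Hypothetical sketch: route low-confidence predictions to a human reviewer.
# The 0.85 threshold and the synthetic task are illustrative assumptions only.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)
X = rng.normal(size=(1_000, 4))
y = (X[:, 0] + rng.normal(0, 1.5, 1_000)) > 0   # noisy task: many uncertain cases

model = LogisticRegression().fit(X, y)
CONFIDENCE_THRESHOLD = 0.85

def decide(x):
    """Return the model's decision, or defer to a human when it is unsure."""
    proba = model.predict_proba(x.reshape(1, -1))[0]
    if proba.max() < CONFIDENCE_THRESHOLD:
        return "escalate_to_human"
    return "class_1" if proba.argmax() == 1 else "class_0"

decisions = [decide(x) for x in X[:200]]
print("deferred to humans:", decisions.count("escalate_to_human"), "of 200")
```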
Hope for the Future
Thoughtfulness is needed to address these opportunities with artificial intelligence, and emerging AI tools are providing a structured approach to these concerns. One proposed approach for documenting models is the Model Card, which captures the following (a minimal sketch follows the list):
- Intended use (uses & users)
- Training algorithms and fairness considerations
- Demographic & Environmental factors
- Performance measures (model, decision sensitivity)
- Learning data sets
- Ethical considerations
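As a minimal sketch of what such a card could look like in practice, the example below captures it as a plain data structure that can be versioned and published alongside the model. The field names loosely mirror the items listed above, and every example value is invented.

```python
# Hypothetical sketch: a model card as a plain, versionable data structure.
# Field names loosely follow the list above; all example values are invented.
from dataclasses import dataclass

@dataclass
class ModelCard:
    intended_use: str
    intended_users: list[str]
    training_algorithm: str
    fairness_considerations: str
    demographic_factors: list[str]
    performance_measures: dict[str, float]
    training_datasets: list[str]
    ethical_considerations: str = ""

card = ModelCard(
    intended_use="Rank job ads for relevance; not for employment decisions.",
    intended_users=["ad operations analysts"],
    training_algorithm="gradient-boosted trees",
    fairness_considerations="Click-through rates compared across gender and age bands.",
    demographic_factors=["gender", "age band", "region"],
    performance_measures={"auc": 0.81, "demographic_parity_gap": 0.04},
    training_datasets=["ad_interactions_2023 (illustrative name)"],
    ethical_considerations="Outputs audited quarterly; humans review flagged ads.",
)
print(card.intended_use)
```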
Another proposed documentation approach, for data sets, is the Data Sheet. These aid the creators and users of data by capturing (a minimal example appears after the list):
- Purpose and Creator
- Data set instances and contents
- Data relating to people and associated sensitivities
- Data quality issues
- Data collection strategy
- Standardization of data terminology
- Audit data interaction with users
- Data capture, modification, and transformation
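In the same spirit, a data sheet can start as something as simple as a structured record kept next to the data set it describes. The sketch below uses a plain dictionary keyed to the items above; every value is an invented example.

```python
# Hypothetical sketch: a data sheet kept as a structured record alongside the
# data set it describes. Keys follow the list above; all values are invented.
import json

datasheet = {
    "purpose": "Sentiment analysis of customer support transcripts.",
    "creator": "Example analytics team (illustrative)",
    "instances": "Chat transcripts, one record per conversation.",
    "contains_personal_data": True,
    "sensitivities": ["names redacted", "no payment data retained"],
    "known_quality_issues": ["missing timestamps before 2021"],
    "collection_strategy": "Exported nightly from the support platform.",
    "terminology_standard": "Internal glossary v2",
    "access_audit": "All reads logged to the data catalog.",
    "transformations": ["lowercased text", "PII removal", "tokenization"],
}
print(json.dumps(datasheet, indent=2))
```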
Together, these tools, and others serving similar purposes, will provide transparency and accountability. They will be important inputs for determining the appropriate levels of risk management and regulation, which, for now, seem inevitable.
While all of this new computer science is unfolding before our very eyes, Yalo is already engaging in this bold new frontier. We now feature sentiment analysis tools and services powered by AI to help us serve our clients better as we collaboratively build better brands. We’re excited about what this future will bring.