Even though the hype around AI is sky high, has the technology proven useful for enterprises? As AI enables companies to move from the experimental phase to new business models, a new study indicates that errors can be reduced through careful regulation of human organisations, systems and enterprises. The study, by Thomas G Dietterich of Oregon State University, reviews the properties of highly reliable organisations and how enterprises can modify or regulate the scale at which AI is deployed. The researcher says, “The more powerful technology becomes, the more it magnifies design errors and human failures.” The responsibility lies with tech behemoths, the new High-Reliability Organisations (HROs), which are piloting AI applications to minimise risks.
Most of the bias and errors in AI systems are built in by humans, and as companies across the globe build ambitious AI applications in various fields, the potential for human error will also grow. In an earlier article, we wrote about how a public framework addresses the three most important aspects of AI safety: specification, robustness and assurance.
High-End Technology And Its Consequences
As AI technologies automate existing applications and create opportunities and breakthroughs that never existed before, they also bring their own set of risks. The study cites Charles Perrow’s book Normal Accidents, written after a massive nuclear accident, which examined organisations that operate advanced technologies such as nuclear power plants, aircraft carriers and the electrical power grid. The study summarises five features of High-Reliability Organisations (HROs):
- Preoccupation with failure: HROs know and understand that there exist failure modes they have not yet observed.
- Reluctance to simplify interpretations: HROs build an ensemble of expertise and people so multiple interpretations can be generated for any event.
- Sensitivity to operations: HROs maintain staff with deep situational awareness of day-to-day operations.
- Commitment to resilience: HROs practise recombining existing actions to respond to novel situations, and they develop procedures and acquire new skills quickly.
- Under-specification of structures: HROs empower every team member to make important decisions within their area of expertise.
AI Systems And Human Organisations
The researcher draws several lessons from settings where advanced technology has been deployed. AI’s history has been peppered with peaks and valleys, and the technology is currently seeing an exuberant period, as noted by a senior executive. As enterprises move to bridge the gap between hype and reality by developing cutting-edge applications, here’s a primer for organisations looking to dial down the risks associated with AI.
- The goal should be to create combined human-machine systems that quickly become high-reliability organisations. The researcher says AI systems must continuously monitor their own behaviour, the behaviour of the human team, and the behaviour of the environment to check for anomalies, near misses and unanticipated side effects of actions.
- Organisations should avoid deploying AI technology where human organisations cannot be trusted to achieve high reliability.
- AI systems should continuously monitor the functioning of the human organisation, checking for threats to its high reliability.
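To make the continuous-monitoring idea above concrete, here is a minimal sketch of how a system might flag anomalies in a stream of operational readings. It uses a simple rolling z-score check; the class name, window size and threshold are illustrative assumptions, not anything prescribed by the study.

```python
from collections import deque
import math

class AnomalyMonitor:
    """Flags readings that deviate sharply from the recent history of a metric.

    A toy example of continuous self-monitoring: the window size and
    z-score threshold below are arbitrary illustrative choices.
    """

    def __init__(self, window=50, threshold=3.0):
        self.window = deque(maxlen=window)  # recent readings
        self.threshold = threshold          # z-score cutoff

    def observe(self, value):
        """Return True if `value` is anomalous relative to the rolling window."""
        anomalous = False
        if len(self.window) >= 10:  # need some history before judging
            mean = sum(self.window) / len(self.window)
            var = sum((x - mean) ** 2 for x in self.window) / len(self.window)
            std = math.sqrt(var) or 1e-9  # guard against zero variance
            anomalous = abs(value - mean) / std > self.threshold
        self.window.append(value)
        return anomalous

# Feed in mostly stable readings, then a sudden spike.
monitor = AnomalyMonitor()
readings = [1.0, 1.2, 0.8, 1.1, 0.9, 1.0, 1.05,
            0.95, 1.1, 0.9, 1.0, 1.1, 0.9, 25.0]
flags = [monitor.observe(r) for r in readings]
# Only the final spike (25.0) is flagged as anomalous.
```

A production system would of course monitor many signals at once (model outputs, human overrides, environmental state), but the principle is the same: compare current behaviour against recent history and escalate deviations for human review.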
In conclusion, the researcher states, “In summary, as with previous technological advances, AI technology increases the risk that failures in human organisations and actions will be magnified by the technology with devastating consequences. To avoid such catastrophic failures, the combined human and AI organisation must achieve high reliability.”