5 reasons AI isn’t being adopted at your organization (and how to fix it)

  • July 14, 2020

Like most nebulous technologies marketed as a cure-all for the 21st-century enterprise, artificial intelligence (and, more specifically, anyone tasked with selling it) promises a lot. But there are major obstacles to adoption in both the public and private sectors, and understanding them is key to understanding the limits and potential of AI technologies, as well as the risks inherent in the Wild West of enterprise solutions.

Consulting firm Booz Allen Hamilton has helped the US Army use AI for predictive maintenance and helped the FDA better understand and combat the opioid crisis, so it knows a thing or two about getting large, risk-averse organizations behind meaningful AI deployments.

For insights on where AI still stumbles, as well as the hurdles it will have to clear, I reached out to Booz Allen's Kathleen Featheringham, Director of AI Strategy Training. She identified the five greatest barriers to AI adoption, which apply equally to public and private sector organizations.

Note: The below answers to interview questions have been rearranged and formatted slightly to obtain listicle perfection. All language is Kathleen’s, with thanks for her keen insights.

1. Governance & Ethics

The first barrier is AI governance, or the lack thereof. As with any powerful technology, AI requires structure in its implementation: a framework that governs both its capabilities and the ethical principles behind its use.

It’s important to remember that AI solutions are built by imperfect humans. We’ve seen examples of models that unintentionally generate discriminatory outcomes because the underlying data was skewed towards a particular segment of the population. Whether they resulted from bias in the dataset (e.g., exclusion or sample bias) or from humans’ unconscious biases, these outcomes rightly erode trust in the technology and slow adoption. 
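As a concrete illustration of the kind of skew Featheringham describes, here is a minimal, hypothetical check for sample bias and skewed outcomes, written in Python with pandas. The column names, the toy data, and the 0.8 "four-fifths" threshold are assumptions made for illustration, not part of any specific Booz Allen methodology.

```python
# Hypothetical bias check: does the training data under-represent a group,
# and does a model's positive-prediction rate differ across groups?
# Column names ("group", "prediction") and the 0.8 threshold are illustrative assumptions.
import pandas as pd

def representation_report(df: pd.DataFrame, group_col: str = "group") -> pd.Series:
    """Share of each demographic group in the data (sample-bias check)."""
    return df[group_col].value_counts(normalize=True)

def disparate_impact(df: pd.DataFrame, pred_col: str = "prediction",
                     group_col: str = "group") -> float:
    """Ratio of positive-prediction rates between the least- and most-favored groups.
    Values well below 1.0 (e.g., under the common 0.8 'four-fifths' rule) flag skewed outcomes."""
    rates = df.groupby(group_col)[pred_col].mean()
    return rates.min() / rates.max()

# Example usage with toy data:
data = pd.DataFrame({
    "group": ["A"] * 80 + ["B"] * 20,
    "prediction": [1] * 60 + [0] * 20 + [1] * 5 + [0] * 15,
})
print(representation_report(data))   # group A dominates the sample
print(disparate_impact(data))        # 0.25 / 0.75 = 0.33, well under 0.8
```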

So how do we fix it?

We must balance freedom, ethics and privacy with efficiency and other benefits AI makes possible. This foundation for AI requires that people at all levels of an organization understand their role in building a governance structure. A strong governance system includes a set of ethical design and development principles that are regularly reviewed, creating a “feedback loop.”

It's important to consider these three points when developing a governance framework for AI:

1. Prioritize ethics early.
2. Build robust, transparent, and explainable systems that yield a clear audit trail, with the understanding that these can adjust as the models learn (see the sketch after this list).
3. Ensure measured, monitored roll-outs with robust governance and oversight, guided by clearly documented processes.
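Point 2 above can start small. The sketch below is a generic illustration, not Booz Allen's implementation: it appends one record per model decision (inputs, model version, timestamp, output) so results can be reviewed later. The field names, file path, and predict_fn interface are assumptions.

```python
# Minimal prediction audit trail: append one JSON record per model decision.
# Field names, file path, and the predict_fn interface are illustrative assumptions.
import json
import time
import uuid

def log_prediction(predict_fn, features: dict, model_version: str,
                   log_path: str = "audit_log.jsonl"):
    """Run the model and append an auditable record of the decision."""
    prediction = predict_fn(features)
    record = {
        "id": str(uuid.uuid4()),
        "timestamp": time.time(),
        "model_version": model_version,
        "inputs": features,
        "prediction": prediction,
    }
    with open(log_path, "a") as f:
        f.write(json.dumps(record) + "\n")
    return prediction

# Example usage with a stand-in model:
result = log_prediction(lambda x: int(x["score"] > 0.5),
                        {"score": 0.72}, model_version="v1.3")
```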

2. Culture & Talent

Although AI could be the most transformative technological development of our lifetime, a methodical approach to implementation and adoption is critical. This starts with readying the organization from a cultural standpoint, enabling adoption through effective education on the technology (and therefore trust in it) and offering the necessary technical training.

There has been too much concentration on one-and-done tool trainings, which hampers the development of the next generation of data engineers, who are already in critically short supply. Continual education and training over years is needed to evolve their tradecraft and skills and to speed adoption.

So how do we fix it?

Successful and ethical adoption of AI relies on people who understand and are empowered to put this technology to work. This means building a diverse and AI-knowledgeable workforce, creating opportunities for upskilling and learning across disciplines.

It is equally important to communicate the organization’s objectives clearly while giving employees a voice in how AI will affect the workplace.

3. Data Security

The data and systems operated by AI must be protected from both accidental and malicious interference. There are bad actors who attempt to change AI outcomes by "poisoning" underlying data. A familiar example is a few pieces of tape that trick an autonomous car into seeing a speed limit road sign as a stop sign. This and privacy are very real concerns, given that AI must be entrusted with a certain amount of autonomy to perform its tasks.

AI is still vulnerable to adversarial attacks in which it can be "tricked" and its analytical capabilities put to nefarious use. Given the vast amounts of data AI needs to perform, protecting that data becomes of paramount importance. And since AI's decision-making process is still largely a black box, this is a vulnerability that causes great concern.
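To make the "poisoning" threat concrete, the toy sketch below flips a fraction of training labels and compares a model trained on clean data with one trained on the corrupted set. The dataset, model, and 20 percent poisoning rate are arbitrary assumptions chosen only to show the effect.

```python
# Illustrative label-flipping "poisoning" attack: corrupting a slice of training
# labels typically degrades the resulting model's test accuracy.
# Dataset and model choices are arbitrary assumptions used only to show the effect.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clean = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Poison 20% of the training labels by flipping them.
rng = np.random.default_rng(0)
poisoned_labels = y_train.copy()
idx = rng.choice(len(poisoned_labels), size=len(poisoned_labels) // 5, replace=False)
poisoned_labels[idx] = 1 - poisoned_labels[idx]
poisoned = LogisticRegression(max_iter=1000).fit(X_train, poisoned_labels)

print("clean model accuracy:   ", clean.score(X_test, y_test))
print("poisoned model accuracy:", poisoned.score(X_test, y_test))
```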

The solution? It’s all about transparency …

4. Transparency

Because AI is still evolving from its nascency, different end users may have wildly different understandings of its current abilities, best uses, and even how it works. This contributes to a black box around AI decision-making. To gain transparency into how an AI model reaches its results, it is necessary to build measures that document the AI's decision-making process. At this early stage, transparency is crucial to establishing trust and adoption.
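One widely used way to document how a model reaches its results is permutation importance: shuffle each input feature in turn and measure how much the model's accuracy suffers. The sketch below uses scikit-learn on a public dataset as a generic illustration; it is not the specific transparency measure described in the interview.

```python
# Permutation importance: one common way to document which inputs drive a model's results.
# Dataset and model are generic stand-ins chosen for illustration.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

data = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(data.data, data.target, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and record how much accuracy drops.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
top_features = sorted(zip(data.feature_names, result.importances_mean),
                      key=lambda pair: pair[1], reverse=True)[:5]
for name, importance in top_features:
    print(f"{name}: {importance:.3f}")
```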

While AI's promise is exciting, its adoption is slowed by the historical fear of new technologies. As a result, organizations become overwhelmed and don't know where to start. When pressured by senior leadership, and driven by guesswork rather than priorities, organizations rush into enterprise AI implementations that create more problems.

Which leads us to …

5. Data & Infrastructure Readiness

AI often relies on large volumes of historical data and sophisticated mathematics. Before an AI project can be implemented, organizations must achieve a certain level of data and infrastructure readiness. Common barriers include data shortcomings and disparate data sources, lack of technological infrastructure, testing inefficiencies and collaboration issues.

AI needs a strong infrastructure as its foundation, including high-performing and scalable computing systems, high-volume storage systems, and GPU architectures. The process of effectively developing, deploying, and monitoring models in production environments is time-consuming, and many organizations simply do not know how to operationalize their data platforms at enterprise scale. Furthermore, the data that AI utilizes must be significantly scrubbed, but organizations have not invested properly in doing so, which limits the insights AI and predictive analytics can provide. Failure to invest in and establish a strong infrastructure is responsible for much of the estimated 90 percent of AI models that are never put into production.

How to do it right?

Organizations that understand their organizational mission, data and infrastructure, and ethical needs and articulate that in a robust AI strategy can hit the ground running. During design and development, organizations must leverage strategies like human-centered design to ensure end users’ needs inform system design. Strong data strategies include standardized methods for labeling, validating, cleaning, and organizing data across an enterprise. Choosing an open source platform solution will yield crucial insights into the health and lineage of data and can remove organizational data siloes and allow for a better, enterprise-wide approach to data management. Finally, investment in the infrastructure (e.g., cloud, GPUs) needed to support AI solutions is a critical foundational step as computing power is essential to enabling AI.
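A standardized validation step can be as small as a schema check that runs before data enters a model pipeline. The sketch below is a hypothetical example: the column names, types, and rules are illustrative assumptions rather than a prescribed standard.

```python
# Hypothetical schema check for incoming data: enforce expected columns, types,
# and value ranges before the data reaches a model pipeline.
# The schema itself is an illustrative assumption, not a prescribed standard.
import pandas as pd

SCHEMA = {
    "customer_id": "int64",
    "signup_date": "datetime64[ns]",
    "monthly_spend": "float64",
}

def validate(df: pd.DataFrame) -> list[str]:
    """Return a list of human-readable problems; an empty list means the frame passes."""
    problems = []
    for column, dtype in SCHEMA.items():
        if column not in df.columns:
            problems.append(f"missing column: {column}")
        elif str(df[column].dtype) != dtype:
            problems.append(f"{column}: expected {dtype}, got {df[column].dtype}")
    if "monthly_spend" in df.columns and (df["monthly_spend"] < 0).any():
        problems.append("monthly_spend contains negative values")
    return problems

frame = pd.DataFrame({
    "customer_id": [1, 2],
    "signup_date": pd.to_datetime(["2020-01-01", "2020-02-01"]),
    "monthly_spend": [19.99, -5.00],
})
print(validate(frame))  # flags the negative spend value
```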

Ultimately, spending the time upfront to organize, prioritize, and execute against mission, data and infrastructure, and ethical needs is the best way to position organizations for long-term success.

Will more organizations and enterprises come to embrace AI?

We have seen positive signs that the private sector is ready to embrace AI and advanced analytics, and in many cases already has. As both the public and private sectors navigate the expected challenges in this journey, we're hopeful, as history shows us that technology transformation is more a question of when than if. And AI has attracted many leaders in technology and adjacent fields, creating a robust and necessary discussion about how we build and deploy AI.

Additionally, it's encouraging to see that many in industry have a "don't go it alone" perspective, developing important partnerships that bring all the pieces together. Booz Allen, for example, has been working to demystify AI for the public sector, partnering to bring NVIDIA's deep learning training to the federal sector. Together, we've trained people from more than 15 government organizations within just the last year.

Ultimately, we are excited about the AI-powered possibilities that lie ahead. AI already plays an important role in combating cybercrime, and it has helped speed our global response to the COVID-19 pandemic. It is important that we remember, however, that AI is ultimately an enabler that will help humans tackle seemingly complex challenges.

Article source: https://www.zdnet.com/article/5-reasons-ai-isnt-being-adopted-at-your-organization-and-how-to-fix-it/#ftag=RSSbaffb68
