WHEN ARTIFICIAL INTELLIGENCE became a mainstream topic of conversation in 2023, much of the initial discussion centered on whether the “new” technology could be trusted, and whether it was going to replace millions of humans in the workforce.

Now that the dust has settled and AI is proliferating across business applications, we can look more objectively at the real-world implications of the adoption of such a transformative technology.

We hear daily about AI being used to help workers perform tasks in customer service, marketing, and even climate research. New job categories like “prompt engineer” are now common as AI advancements are leading to new opportunities that can improve how we work, live, learn, and interact with one another.

These opportunities demand open and transparent innovation: it is essential to empower a broad spectrum of AI researchers, builders, and adopters with the information and tools they need to harness these advancements in ways that prioritize safety, diversity, economic opportunity, and benefits for all.

If we are to realize AI’s full benefits for business and society, a skilled workforce trained in the principles of trust and transparency is paramount.

In February, Gov. Maura Healey signed an executive order to establish the Artificial Intelligence Strategic Task Force to better understand the potential impact of AI on state government, the private sector, higher education, and individuals.

In addition, Healey will seek $100 million through economic development legislation to create an Applied AI Hub in the state.

These endeavors have the potential to help propel the growth of AI in the state, stimulate job creation, and elevate the Massachusetts economy.

I welcome the governor’s executive order on AI and the creation of the AI Task Force in Massachusetts. When governments prioritize responsibility and security, consumers across the state will be able to benefit from this powerful technology.

But a fundamental problem needs to be addressed that could significantly undermine AI’s full potential for business and society, and it’s the inverse of the initial concern many had about AI taking people’s jobs: There are not enough skilled workers to fill current and future AI-related jobs.

According to the IBM Global AI Adoption Index 2023, conducted by Morning Consult on behalf of IBM, the top barrier hindering successful AI adoption at enterprises exploring or deploying the technology is “limited AI skills and expertise.”

In Massachusetts, hiring skilled workers is a top challenge, according to a recent survey of business leaders commissioned by the Massachusetts Business Alliance for Education and Student Pathways to Success.

According to the survey of 141 Massachusetts business leaders, 87 percent say it’s very (35 percent) or somewhat (52 percent) difficult to find people with the right skills. Meanwhile, 95 percent say it should be the top (26 percent) or among the top (69 percent) priorities of Gov. Healey’s administration to make improvements to our education system to ensure that more students are better prepared for college and/or careers.

Business leaders know a shortage of skilled workers can limit innovation, output, and productivity. Without access to talent or the means to upskill employees, employers may hesitate to fully explore AI opportunities.

With so much at stake, who should be responsible for closing the AI skills gap?

We all share responsibility. Business leaders statewide have a duty to be accountable for AI skills development across the entire talent pipeline and within their own workforce.

Opportunities for AI skill building through educational channels are abundant in Boston for learners of all ages.

Boston Public Schools offers AI courses, webinars, and summer programs to K-12 students. Bunker Hill and Middlesex community colleges offer AI classes and workshops to adult learners, while area universities, including MIT, Northeastern, and Harvard, offer advanced degrees in AI. Local startup Matrix Holograms is using AI to power holographic tutors for individualized student learning.

While the educational component of skill building is essential to closing the skills gap over time, an urgent need for immediate skilling persists. According to a recent global study from the IBM Institute for Business Value, business leaders surveyed estimate that 40 percent of their workforce will need to reskill as a result of implementing AI and automation over the next three years.

This is why IBM recently announced a commitment to train 2 million learners in AI by the end of 2026 through free online training and educational partnerships.

As a foundation for the AI skills development process, organizations should consider the following properties as they work toward establishing trust and transparency at every stage of the AI lifecycle:

Transparency

Transparency reinforces trust, and the best way to promote transparency is through disclosure. Transparent AI systems share information on what data is collected, how it will be used and stored, and who has access to it. They make their purposes clear to users. 

Explainability

While transparency offers a view into the AI technology in use, simple and straightforward explanations of how AI is used are also needed. Explanations must be easy to understand, particularly regarding what went into an algorithm’s recommendations, and relevant to a variety of stakeholders with a variety of objectives.

Fairness

Fairness in AI refers to the equitable treatment of all individuals by an AI system. Properly calibrated AI can assist humans in making fairer choices, countering human biases, and promoting inclusivity. An AI solution designed to be fair must remain fair as the data and conditions around it change.

Robustness

As AI becomes a larger part of the human experience, it also becomes more vulnerable to attack. To be considered trustworthy, AI-powered systems must be actively defended from adversarial attacks, minimizing security risks and enabling confidence in system outcomes. Robust AI effectively handles exceptional conditions, such as abnormalities in input or malicious attacks, without causing unintentional harm.

Privacy 

AI systems must prioritize and safeguard consumers’ privacy and data rights and provide explicit assurances to users about how their personal data will be used and protected. Systems should also enable consumers to choose how their personal data is collected, stored, and used, through clear and accessible privacy settings.

AI has powerful potential to help us discover novel medications, engineer next-generation energy solutions, and enable new conveniences. However, business leaders, both local and global, must prioritize principles of trust and transparency in AI skills development if we are to realize the full benefits for society.

Mohamad Ali is the chief operating officer of IBM Consulting.