How does independently audited Ethical AI support your organization’s digital journey toward Zero Wasted Potential?
What is Ethical Talent AI? Why do you need to check that the AI you’re using has been independently audited? What are the applications of AI in workforce technology, and what is the urgency around getting it right? What happens if we don’t?
More specifically, how can employers embrace Ethical AI as a people-first business strategy, rather than waiting for it to become a compliance or regulatory nuisance?
Like any innovation meant to drive progress and value, artificial intelligence rewards employers who think proactively about its ethical, strategic, and business-impacting applications to talent strategy and workforce management. What’s more, people expect employers to behave ethically. In that regard, employers have a responsibility not just to design and deploy ethical practices but to prove their practices are ethical. As in, put it on display. That’s how you build trust.
Read on to understand the opportunities and the risks of AI-driven talent and workforce intelligence solutions. You'll learn considerations for applying AI in a way that improves your business outcomes and employee experience. You’ll also better understand what you should expect from your HR technology vendors to ensure your operations are aligned to your organizational values and that you are legally compliant with the latest AI and people analytics laws.
Warning on Ethical Talent AI!
Currently, there are no consistent and universally applicable standards for Ethical AI. That means vendors across all industries can call their technology ‘ethical’ based on a self-assessment, without the input of legal, ethical, or global regulatory experts. But growing scrutiny of the impacts of AI, how machine learning systems are built, and what data their algorithms learn from has led many governments and lawmakers to call for official regulations. These regulations will ensure HR AI vendors are independently audited and held accountable, giving you confidence in your talent decision-making support.
Whenever a vendor claims Ethical AI, ask whether it has been independently audited. If not, why not? Learn more: On Demand: Unpacking how new Ethical AI laws will impact workforces in 2023 and beyond
Digital transformation of workforce experience includes Artificial Intelligence
Digital transformation of HR and workforce experience is required if we are to drive necessary progress toward the future of work. In a myriad of ways, digital transformation helps us reimagine and redesign work in ways that develop rather than impede human potential.
Every organization professes to be people-first, but being truly human-centric in our talent practices and workforce management requires that we consider the needs, best possible experience, and necessary outcomes of people. Digital transformation allows us to do that. In fact, done well, a digital strategy can achieve the optimal outcome for both people and business: Zero Wasted Potential.
To move in that direction, organizations are challenged to design people, processes, and technology in ways that feel personalized, relevant, and frictionless. Digitization also considers flexible working conditions, new models of work, and reimagining all the ways in which work can get done – which includes automation. In the broadest terms, digital transformation is a challenge to employers to meet the rising but altogether reasonable expectations of the workforce: to be fully known, to be fully understood as whole people who are a dynamic collection of skills and desires, and to be fully activated for all their potential.
One vehicle for delivering on this promise is Artificial Intelligence. We have a hard time agreeing on a common, simple definition of Artificial Intelligence, and maybe that’s because AI is a combination of things. At its heart, Artificial Intelligence (AI) is the ability of a machine to “sense, comprehend, act, and learn with human-like levels of intelligence.” A simple example of AI in our everyday lives is the way you might ask Siri to perform a task or Google search on your phone as your digital assistant. In a work environment, simple AI might scan resumes or spot biased language in a job posting. A more sophisticated example of AI in business is intelligent career planning, where relevant learning paths might be suggested based on individualized career goals.
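To make the job-posting example above concrete, here is a minimal sketch of a biased-language scan. The term list and suggested alternatives are hypothetical, not any vendor’s actual lexicon; real tools rely on far richer linguistic models.

```python
# Minimal sketch of a biased-language scan for job postings.
# The term list below is illustrative only, not an authoritative lexicon.
BIASED_TERMS = {
    "rockstar": "exclusionary jargon; prefer 'skilled'",
    "aggressive": "gender-coded; prefer 'proactive'",
    "young": "age-related; describe skills instead",
}

def scan_posting(text: str) -> list[tuple[str, str]]:
    """Return (term, suggestion) pairs for flagged words in a posting."""
    words = {w.strip(".,;:!?").lower() for w in text.split()}
    return sorted((t, s) for t, s in BIASED_TERMS.items() if t in words)

flags = scan_posting("Seeking a young, aggressive rockstar developer.")
# flags contains 'aggressive', 'rockstar', and 'young' with suggestions
```

Even this toy version shows the shape of the task: detection is mechanical, but choosing what counts as biased language is a policy decision that benefits from independent review.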
AI presents highly relevant and powerful opportunities when it comes to optimizing your workforce and promoting ethical talent strategies to unlock human potential. For many reasons, organizations are on a mission to build talent resilience, or a sustainable framework to ensure people trust they can stay and grow with an employer. This drives retention and loyalty, unlocks capacity, and reduces waste.
Building organizational resilience requires visibility into the skills of your workforce coupled with ways to match and nudge talent to opportunity, drive upskilling and reskilling where needed, and enable flexible pathways between the right employee or candidate and timely business need. Artificial Intelligence can do all that and more, but it needs to be deployed with intention, leveraged ethically, and evaluated for the right measures of success.
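As a toy illustration of the skills-visibility and matching idea above (the names, skills, and scoring function are all invented for the example), matching talent to opportunity can start as simple skill-overlap scoring:

```python
# Hypothetical sketch: rank employees for an open role by skill overlap.
def match_score(employee_skills: set[str], role_skills: set[str]) -> float:
    """Fraction of the role's required skills the employee already has."""
    if not role_skills:
        return 0.0
    return len(employee_skills & role_skills) / len(role_skills)

role = {"python", "sql", "stakeholder management"}
employees = {
    "Ana": {"python", "sql", "docker"},
    "Ben": {"sql", "stakeholder management", "python", "ml"},
}
ranked = sorted(employees, key=lambda e: match_score(employees[e], role),
                reverse=True)
# Ben covers all three required skills; Ana covers two of three.
```

Production systems replace exact string matching with skills ontologies and inferred adjacent skills, which is where upskilling and reskilling recommendations come from.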
Artificial Intelligence is only as good as how we use it
The current focus in artificial intelligence is Ethical AI. At a minimum, Ethical AI means thoughtful consideration of how a technology will be used and who will be responsible for it, weighing ethical risks like bias, privacy, and transparency. More importantly, Ethical AI must draw on the independent input and regulation of legal, ethical, and global regulatory experts; only when there is proof of an independent audit can you rest assured that AI is truly being used for good. (Read more about Ethical AI and why it matters for your business.)
It’s not that AI is inherently unethical; it’s that it is vast, rife with opportunity, and a playground of use cases. It’s also a playground for bias if not designed and deployed with careful intention. Bias is inherent and cannot be eliminated. It can, however, be detected and reduced, so tools that detect, reduce, and account for bias can help drive fairness and transparency in people and business processes. Done right, AI can create a win-win relationship between employers and their people. Why, then, is it being regulated?
It is, we hope, a safe assumption that no organization sets out to infringe upon human rights when it leverages Artificial Intelligence to support business processes. What regulation achieves is consensus, not only on what needs protecting but on the values we must uphold in deploying AI for good.
One value we might uphold in regulating the ethical use of AI is the protection of privacy. “You can have a positive impact on my life by presenting me with information and experiences that are highly tailored to who I am, where I live, how I work or spend my time, and what I’ve indicated I am interested in.” But people deserve the right to know what information of theirs is being used and to consent to that use. Standardized regulation creates common language, interpretation, and expectations around data privacy, along with assurance of compliance, governance, audit, and rigor.
Reejig has proudly pioneered the way for independent audits and ethical decision-making support. We have always taken an unwavering stance on the ethical development and deployment of our AI, which is exactly why, in 2020, we engaged the University of Technology Sydney to conduct an independent audit of our Talent AI. The audit not only made Reejig’s the world’s first independently audited Ethical Talent AI; it set a new benchmark in trust and ethics for our industry. The audit included cross-faculty collaboration with assessments from data science, business, law, and social justice departments, reflecting that ethics in AI is no longer just a data science issue.
This innovation was recognized by the World Economic Forum, which in 2022 named Reejig a Technology Pioneer for developing the world’s first independently audited Ethical Talent AI and for our mission toward Zero Wasted Potential.
Don’t wait for Ethical AI legislation; make it part of your Employee Value Proposition
Ethical AI is a critical tool in any conversation around digital transformation, talent strategy, and workforce experience. Designed with intention and used for good, independently audited Ethical AI supports and validates better talent decisions, including critical decisions around assessing, hiring, developing, promoting, retaining and optimizing your #1 asset, your workforce. Most importantly, it supports Zero Wasted Potential for those organizations looking to reduce redundancies, build agility and sustainability into the workforce, and fully activate human potential.
Your people strategies should uphold promises you make in your Employee Value Proposition, or the promise you make to people in exchange for their time and talent. That’s why every employer who intends to keep talent brand promises will proactively review and refine the way Artificial Intelligence is applied and leveraged in their business.
But to ensure employers take ethics seriously, the government is stepping in and providing guidance and regulation. Beginning in New York in January 2023, the United States has taken steps to guide the design, use, and deployment of automated systems. A number of states are already regulating automated employment decision tools (AEDTs) that leverage AI to make, or substantially assist, candidate screening or employment decisions. Similarly, the Equal Employment Opportunity Commission (EEOC) launched an initiative on “algorithmic fairness” in employment. “As an initial measure in this initiative, the EEOC and Department of Justice jointly issued guidance on the use of AI tools in employee hiring. The guidance focused on AI that can, even if unintentionally, violate laws by screening out employees with disabilities.” The U.S. is aligning with the European Union for globalized regulatory controls, and other countries won’t be far behind.
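For a concrete sense of what an AEDT bias audit measures, audits of this kind center on selection-rate impact ratios across demographic groups. The sketch below uses invented counts and a deliberately simplified calculation:

```python
# Illustrative impact-ratio calculation of the kind used in AEDT bias audits.
# All counts below are invented for the example.
def impact_ratios(selected: dict[str, int],
                  total: dict[str, int]) -> dict[str, float]:
    """Each group's selection rate divided by the highest group's rate."""
    rates = {g: selected[g] / total[g] for g in total}
    top = max(rates.values())
    return {g: rates[g] / top for g in rates}

ratios = impact_ratios(
    selected={"group_a": 40, "group_b": 24},
    total={"group_a": 100, "group_b": 100},
)
# group_b's ratio is 0.6, below the commonly cited four-fifths (0.8)
# threshold, which would flag the tool for closer review.
```

A ratio well under the four-fifths benchmark does not by itself prove unlawful discrimination, but it is exactly the kind of signal an independent auditor investigates.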
The short version is this: Employers don’t have to wait to be regulated by the government. They can and should establish their own ethics standards around the opportunity, application, and governance of AI, and this should include partnering with technology companies who adhere to the same rigorous and ethical standards.
Most importantly, it matters. Designing work better for people need not come at the cost of their personal data privacy, of fair and equitable processes built to detect and reduce bias, or of ethical automation improvements for the business. Everyone can win when you leverage Ethical AI in your organization’s digital journey toward Zero Wasted Potential, but you must partner with your technology providers to ensure it’s being leveraged well and correctly.
Key takeaways
Independently audited Ethical AI is a critical tool in the pursuit of Zero Wasted Potential. Zero Wasted Potential is how we can do better by people, by our organizations, and by society. To be used ethically, applications and automation leveraging AI must be understood, designed with intention, managed and governed. This is how we can assure people we are using technology responsibly, treating data ethically, and managing privacy carefully. This is also how we ensure we are unlocking the full potential of AI and the people who stand to gain from its advancements.
- 2023 will be the year we see the acceleration and proactive adoption of ethical AI legislation
- Employers can start by understanding Ethical AI, how it supports Zero Wasted Potential, and ways to manage and govern it
- They can also get ahead of legislation by understanding how their technology should be self-audited; this includes partnering with independently audited AI technology companies
About the authors
Siobhan Savage | CEO & Co-Founder, Reejig
Siobhan is an award-winning workforce leader obsessed with developing inclusive workforce intelligence and designing meaningful careers to unlock potential at scale. For almost two decades, Siobhan has worked to deliver workforce optimization and resilience globally, and, after experiencing firsthand the impact of not having visibility into the skills and capabilities of people within workforces, she set out to recreate the organizational, people leader, and talent experience as one with complete visibility and Zero Wasted Potential.
Jason Averbook | Senior Partner, Global Leader Digital HR Strategy, Mercer | Leapgen
Jason is a leading analyst, thought leader, and consultant in the area of human resources, the future of work, and the impact technology has on that future. He is the Co-founder and CEO of Leapgen, a digital transformation company helping organizations shape their future workplace by broadening executive mindsets to rethink how to better design and deliver employee services that meet the expectations of the workforce and the needs of the business.
Prior to founding Leapgen, Jason Averbook served as the CEO of The Marcus Buckingham Company (TMBC). In 2005, he co-founded Knowledge Infusion LLC and served as its CEO until 2012, when the company was sold to Appirio. Earlier in his career, he served as the Chief Business Innovation Officer at Appirio Inc., where he led the HCM business. He has also held senior leadership roles at PeopleSoft and Ceridian Corporation. Jason has more than 20 years of experience in the HR and technology industries and has collaborated with industry-leading companies in transforming their HR organizations into strategic partners.