Artificial intelligence is already used in healthcare, retail, manufacturing, banking, and just about any other industry you could name.
It has the potential to change the way we live and work. But whether it's curing cancer, fighting climate change, or reimagining recruitment processes, we still lack consistent, universally applicable standards for Ethical AI.
As things stand, vendors in any industry can call their technology ethical on the basis of a self-assessment, without input from legal, ethical, or global regulatory experts. The challenge is independence: organizations, individuals, and society need real grounds to trust AI, particularly when it is labelled ethical.
Australia’s Artificial Intelligence Ethics Framework and other recommendations are a good place to start. But for AI to achieve its real potential we need industries, governments, and society to come together to pave the way forward.
When developing and deploying AI, teams should consider, for every potential application, how the technology will be used and who will be responsible for it. Ethical risks like bias, privacy, and transparency must be factored in from the start. It's complex, and without a background in ethics and data it can be difficult to feel confident that you're using AI for good.
When we think about Ethical AI, it isn't confined to one discrete area such as the data, the algorithm, or the model itself. The ethical deployment of AI takes an end-to-end view, inclusive of the data, the process, and the people involved.
At Reejig, we have an unwavering stance on the ethical development and deployment of our AI. We want our customers, their people, and society to have the confidence to trust the data and recommendations which we provide.
For this reason, in partnership with the University of Technology Sydney (UTS), we quantitatively and qualitatively test that our algorithms meet global regulations on human rights, anti-discrimination, and equal opportunity.
With a panel of data, legal, ethical, and regulatory experts, UTS assesses our work to ensure we're effectively eliminating bias from the data, the models, and the process of building our AI models. It checks our AI talent matching algorithm against fairness, transparency, accountability, privacy, and security, the ethical principles it identifies as most critical.
We walk the panel through our methods for eliminating bias from our datasets, how the AI learns and improves, how we manage role matching and search terms, and the capabilities of our platform. We also work with industry partners to test that the AI's shortlisted candidates are fit for purpose and aligned with the role requirements.
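To make the idea of a quantitative fairness check concrete, here is a minimal sketch that computes shortlisting rates per demographic group and the disparate-impact ratio (the "four-fifths rule" referenced in equal-opportunity guidance). This is an illustration under assumed data and group labels, not Reejig's or UTS's actual audit methodology.

```python
# Minimal sketch of a quantitative fairness check: per-group selection rates
# and the disparate-impact ratio ("four-fifths rule"). Illustrative only --
# the groups, threshold, and data are assumptions, not an actual audit.
from collections import defaultdict

# Hypothetical shortlisting outcomes: (candidate_group, was_shortlisted)
outcomes = [
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", True),
]

def selection_rates(records):
    """Return the shortlisting rate for each demographic group."""
    totals, selected = defaultdict(int), defaultdict(int)
    for group, shortlisted in records:
        totals[group] += 1
        selected[group] += int(shortlisted)
    return {g: selected[g] / totals[g] for g in totals}

def disparate_impact_ratio(rates):
    """Ratio of the lowest to the highest selection rate across groups."""
    return min(rates.values()) / max(rates.values())

rates = selection_rates(outcomes)
ratio = disparate_impact_ratio(rates)
print(f"Selection rates: {rates}")
print(f"Disparate-impact ratio: {ratio:.2f}")

# A common rule of thumb flags ratios below 0.8 for further review.
if ratio < 0.8:
    print("Potential adverse impact: review the data, features, and matching logic.")
```

A check like this is only one slice of an end-to-end review; an independent audit also looks at how the data was collected, how the model is retrained, and how people use its recommendations.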
The result: we are now the world's first independently audited Ethical Talent AI engine certified by UTS, and, importantly, we are setting a new benchmark in trust and ethics for our industry.
Implementing AI into organizational design is not a simple task. Understanding what it is being used for, how it's going to be used, and who's going to use it is critical to the decision-making process. Whatever talent strategy you plan to support with new technology, there will be ethical risks such as bias, privacy, and transparency to manage.
Use this checklist to guide your decision-making when adopting Ethical Talent AI and to close the gaps in your understanding of the ethical use of talent AI. It is designed to give you the confidence to have knowledgeable conversations with vendors and decide whether a system can be trusted.
Now is the time for us to go beyond current recommendations to remove a key barrier to AI adoption. We need governments to provide firm guidance on how best to address ethical issues, but business and the academic community must also play a part. An ethical approach to AI should be central to business operations, from workflow and customer feedback to building models and process engineering.
Ethical AI also offers a huge opportunity to get ahead of the competition. Bias creeps into decision-making when there isn't enough data to make a fully informed decision. Taking AI ethics seriously shrinks the room for bias to take hold, helping build a stronger, more diverse workforce.