The ‘Ethical AI in Australian Businesses’ study has found AI is seen as the number one technology that will drive business change in the next three years, surpassing the rollout of 5G and the use of predictive analytics.
The survey canvassed the views of 562 IT, customer experience and digital decision-makers in Australian businesses with at least 20 employees to uncover their attitudes to AI.
The research by LivePerson and Fifth Quadrant found AI is already in use at more than a third (35 per cent) of Australian businesses, with 10 per cent of respondents using AI broadly within their business and 25 per cent making limited use of it.
Almost half of those surveyed are planning to introduce AI (22 per cent currently implementing, 27 per cent planning to implement) with the top use cases being in operations (77 per cent) and customer service (72 per cent). Companies using/implementing AI reported stronger outcomes in terms of employee productivity and satisfaction, as well as customer satisfaction and retention.
There are, however, concerns about the potential impact of AI. Businesses are most commonly worried about cybercrime, with 85 per cent concerned about the technology falling into the wrong hands, and 84 per cent about loss of privacy, data breaches and unauthorised access to data. The potential for inbuilt bias within AI systems is another common concern.
Most businesses (82 per cent) are taking steps to lessen the risk of negative outcomes, conducting risk assessments (36 per cent), monitoring industry standards (31 per cent), and providing ethics training for employees (31 per cent).
When it comes to AI accountability, however, Australian businesses are looking outward for leadership on regulation and enforcement of standards: most feel that the responsibility for setting and enforcing AI ethics and principles in Australia should sit with the government or an independent Australian body.
Just two in five (40 per cent) Australian businesses have AI standards or guidelines in place, primarily those that have more fully implemented AI systems within their business. Meanwhile, 40 per cent of businesses feel the Australian Government should be responsible for setting and enforcing AI ethics and principles, while 25 per cent favour an independent Australian body.
The findings come off the back of a report by ACOLA (the Australian Council of Learned Academies), which called for the government to develop a national artificial intelligence strategy. The report suggests the rise of AI is inevitable and will disrupt Australian life in every possible way.
The Effective and Ethical Development of AI Report suggests there is an opportunity for the development of strategy to ensure AI is used ethically to improve our wellbeing.
ACOLA suggests proactive engagement, consultation and ongoing communication with the public about the changes and effects of AI will be essential for building community awareness. ACOLA says earning public trust will be critical to enable acceptance and uptake of the technology.