NOVEMBER 2023 | GOVERNMENT CIO OUTLOOK | IN MY OPINION

REGULATING AI - THE GENIE IS OUT OF THE BOTTLE

Large language models (LLMs) and generative AI have redrawn the frontier of what we thought artificial intelligence (AI) could do. Ask any current generation of AI tools to whip up a short biography of your favourite artist, and you will get a succinct summary. Ask it to write a song in the style of that same artist, and you will get something impressive.

What has changed is the way AI works and the size of the datasets used to train it. Generative AI is trained to 'focus', and is trained on datasets of sizes unimaginable to mere mortals, literally trillions of examples. This unsupervised training occasionally leads to some surprises. When presented with a supposedly factual response to your AI query, some results may refer to 'real world' sources that simply do not exist. Similarly, a request to generate an image from a verbal description may lead to something a little more 'Salvador Dali'-like than you may have expected. This scaled-up version of the age-old adage 'garbage-in-garbage-out' leads to the modern twist 'garbage-in-sometimes-hallucination-out.' Nonetheless, the responses from the latest generation of AI tools are pretty impressive, even if they need to be fact-checked. So, what does this mean for people thinking of regulating AI or putting AI policies in place?

AI Is Different To Other Technologies: Some of the concerns raised about AI could just as readily be applied to other technologies when they were first introduced. When addressing concerns with the use of AI, if you replaced 'AI' with 'quantum', 'laser', 'computer' or even 'calculator', some of the same concerns would arise about appropriate use, safeguards, fairness, and contestability. What is different about AI is that it allows systems, processes and decisions to happen much faster and on a much grander scale. AI is an accelerant and an amplifier.
In many cases, it also 'adapts', meaning what we design at the beginning is not how it operates over time. Before developing new rules, existing regulation and policy should be tested to see if they stand up to the potential harms and concerns associated with those three 'a's. If your AI also generates or synthesises, then a few more stress-tests are needed, as this generation goes well beyond what you can expect from your desktop calculator.

Ian Oppermann, Government Chief Data Scientist, Department of Customer Service, NSW