2016 certainly seems to be the year that AI starts to make its mark in mainstream media.  There's been a lot of talk of its involvement in cars, healthcare, security and now, financial services.

I've written previously about the need for the legal world to adapt to the speed of change and the challenges AI presents (http://mediatech.footanstey.com/post/102df8f/adapting-the-law-for-an-ai-world), and this article in American Banker stood out for me because it looks at the challenges financial services also faces as AI's influence on decision-making, offers and product management grows.

It's an interesting time for an industry still under the spotlight from the fallout of the credit crunch.  It's under financial pressure from continued depressed interest rates, regulatory fines and investigations, and the expectations of consumers and consumer bodies, and AI offers a way to drive efficiencies.

And yet big questions remain about how it could influence our own use of, and access to, an industry that is so important to health, wellbeing, quality of life, prosperity and opportunity.

Is it realistic to remove human involvement completely from the process? Can the behaviours that a machine learns really be controlled?  Who decides the morals the machine is governed by? What social and societal differences can be accommodated?

The bit that really stuck out for me as a lawyer (and as we face the changes on the horizon from the General Data Protection Regulation due in 2018) was the use of data by AI systems.

As more of our lives go online, and as more data about us is stored with a wide variety of organisations, how do we govern and control what data is used to make financial decisions about us?  Will we truly be given the freedom to decide whether that processing takes place, or will it simply be a requirement of taking the product?

And if, at the end of the day, the computer says 'no', is that it?

We've seen, as recently as yesterday, Theresa May dismissing the idea of a points-based immigration system on the grounds that box-ticking doesn't necessarily lead to the right decision (that's a debate for a different day), but surely the same principle applies here?

Does there not need to be a level of judgement outside of pre-defined criteria, looking at the so-called 'bigger picture'?  Is AI, as you and I know it, capable of that yet?

AI is not going away, and it is these and other unanswered questions that make it such a fascinating area to watch.