In conversation with Gi Fernando – AI and Upskilling

Interview – 31 May 2024

We spoke to entrepreneur, engineer, NED and angel investor Gi Fernando about AI upskilling and his work in this space. Gi co-founded global social technology leader Techlightenment (sold to Experian plc), the first company to provide a tool on top of the Facebook Ads API. Gi is also Chairman of Freeformers, which he founded in 2012 with a mission to use accelerated learning techniques to help young people from all backgrounds develop high-impact digital skills. We also discuss his work with Makers, which delivers transformative technical education to bridge people into future-facing careers. He is passionate about using tech to create a level playing field by unlocking digital skills in young people and enabling greater diversity and social inclusion.

Why is now the time when we should really be thinking about AI based upskilling?

It's an interesting question, isn't it? "Why now, or if not now, when?" is probably a good way of thinking about it. Technology helping with work has been happening for a long time, and companies and governments have historically been really poor at transitioning to the changes it brings. Just think about, say, coal miners: I don't think people knew that the pits in the UK would close before there were the strikes and all the publicity, and yet the companies and the governments could not muster the political will or the votes to invest in reskilling fast enough. So even through the industrial age you've had this problem where companies (and that includes SMEs) and governments know a change is going to happen, but then don't do anything until they fall off a cliff edge.

Some of the technologies of the past were relatively slow-impact. We've moved from molecules to electrons, and where you have electrons you have software, and you have semiconductors, and you have lasers, and all sorts of things like that. Now AI is an accelerant of all those things - not just LLM generation, which also has an impact on creative industries, generating pictures, generating text, etc. It's an accelerant now because we also have disaggregation. So there's this confluence of technology with big societal tsunamis of climate change, inequality, national politics and fear, which is a bit of a hotbed. And usually in these hotbed moments, the last thing people really take into account is skills.


Do you think that it could be said that businesses are investing in AI as a piece of technology but not actually bringing their business strategy, their business culture and their L&D agendas along?

You can adopt technology, but there are massive risks if you don't have the skills to adopt it properly in every role, not just in technical roles. And it's very easy for senior people who might be slightly scared of it to think of AI as a capital expense - a case of "let's just work with the incumbent suppliers we have and use their software" - without actually thinking about who's going to be using it, what needs to change in the business and the culture, and what processes need to change for the business to capture value from it properly. That's all about skills and people and teams.

The thing people may not have thought about from a skills perspective is security, because you need to equip your teams properly to understand the risks of having your data processed by someone outside your company, and that party's incentives. Then there are other things: have you got the on-premises skills to manage your own data infrastructure, and is it of sufficient quality to deliver the business outcomes you want? If part of it is hybrid, have you got a way for that data to go to the cloud and back again without opening up risks to your company?

The other thing is encryption. For example, you defend your company network on the premise that you don't want people to break in. But if they've already broken in and already got all your data, and they're waiting for computing power to progress to a stage where it can decrypt all the information they now have about you and your company, then putting walls in is kind of pointless. I wonder whether, beyond compliance, legal and security teams, senior leaders understand the importance of skills in this regard. And it's more about having a culture where people are able to learn and develop skills, and the agility around the business to create capacity and skill sets.

Not thinking about workforce reskilling in every single job is kind of negligent if you think about it in that context. Why would you not do that?

Something that we've been thinking about is how much casual AI usage in workplaces goes under the radar. But what you're saying on that data point is that there is a case for developing proper AI strategies, where businesses do invest in having a set tool or their own LLM?

I hadn't thought of a question like that, but I think this happens with every new technology. I remember in my younger days I worked for a bank, and they didn't want anyone to connect to the Internet. Sure enough, one of their departments set up a little Exchange Server in the corner with a dial-up connection and a completely open mail server - and that's what happens, right? If you're not on top of it, someone else will be. You can't just say "you can't use this new technology" while at the same time telling your employees to make money and profit by being competitive in the marketplace. You're opening yourself up to complete confusion in the workforce as well as massive risk.

Often new technology takes a while to come onto the risk register, and the compliance, legal and risk teams who've been around for 15 or 20 years are the ones left to deal with any issues. So I think there's a great education exercise with those teams, but also more broadly across the workforce. Then we have the security points around AI and data, but also around whether AI is getting the right answer and how we validate it. If you're thinking of it as a quick way of getting stuff done then, yes, it's a great tool. But then what's your metacognition around doing this? What are you learning, how are you developing, and are you learning how to use AI with best practice? Is this information being shared among colleagues?

All of this should incentivise companies to make sure people are exchanging knowledge rather than pretending technology use is not happening. It's a case of "well, they're going to use it, so let's think about how we use it". That is going to be an emerging trend, just the way we use Excel or whatever else - it's just going to emerge with a different kind of interface. All this needs to be supported by reskilling. You can't just expect people to get on with it themselves.

A lot of business leaders are talking about what their companies are doing, and of course taking different slants with their messages. For example, Procter & Gamble's CIO is pushing for AI to be removed from the domain of the tech teams, with everybody needing to do some training in AI - turning it into a culture-change piece, not purely a tech-change piece.

I think that's so true, and there are great leaders who get this. I was with a friend of mine and we were downloading large language models and seeing what they could do together - and that's what people should be doing. If you're going to make a strategy about AI, you should know what you're talking about, because you can't get the sense of it, or the nuance, without actually having a go and trying it - it is an emerging technology. If you're not playing with it, you've got some real problems ahead of you, because you're going to make some bad decisions. As a senior leader, you can't just be reading documents; you need to be having a go and then discussing your findings with others.

The other danger area is procurement. If you're a procurement department and a VC-backed tech company which is going to grow like billy-o is selling you stuff, how do you know whether a product is good and secure, with the longevity to stay that way? You need to be able to ask the right questions and, again, that's a reskilling job.


There's also something to be said about not outsourcing all of your upskilling and even using outsourced experts to upskill your internal L&D teams who can then be mentors to the wider business.

I love the Champions model, which is the right way to do this sort of tech-oriented culture transformation. I was involved in Barclays Digital Eagles and they did a brilliant job, because they made this model a thing separate from hierarchy. It's people who really want to learn - not necessarily experts, but with a commitment to learning and subsequently teaching. And that really helps make L&D a cultural exercise. You get exec training days, or people going off to Cambridge Judge Business School to get a certificate, but that does not lead to outcomes of widespread transformation. A Cambridge Judge certificate is all well and good, but all of these models of learning and training have their place.


Tell us a little about the work you're doing with Makers as that's all about upskilling and launching people's tech careers.

Anything in skills and improving quality is something I am really big on, and quality really matters when you're going through any sort of tech change. Quality, for me, is a combination of knowledge, mentality, mindset and behaviours. Makers have the highest achievement rates for their kind of apprenticeship. The question I ask is "out of the people who start, how many finish?", and the answer is "well, actually everyone finishes, but some of them get promoted before the end, so we have to get them back to do their certificate". It's like, "OK, that's what you want to hear."

The other thing you're looking for with quality, and part of the reason why I joined, is the mindset they call LQ, which is not IQ or EQ. LQ is the ability to learn. And I think it's more than that: it's the ability to learn and deploy; to train that muscle in your brain, to notice that you're learning, and to learn to discuss. With AI, that's important. LQ is potentially more important than knowing the syntax of a programming language. It's about learning, deploying, and being able to communicate to upskill others, and Makers teach this as part of their curriculum - that's why I joined. If you're a company, the ROI of an 80% achievement rate among people with LQ versus 40% among people who might not have it, in a security department or a finance department, is incredible. I joined as their Chairman for that exceptional ROI; that's what I love about them. I think organisations like Makers are critical to helping corporates, but actually it needs to be across every job - you shouldn't just make changes in the tech department.

What about Freeformers, an organisation in a different space but very much about improving culture and improving talent? Is the AI discussion part of it?

I think Freeformers was too early as a business; I set it up a very long time ago. It was based around exactly that premise: the impact of exponential technology on organisations and the dangers of the inequality it causes. That's why I set it up, and we had a "one to one for one" model. We reskilled every part of an organisation, but for every person we ran our workshops with, we helped a disadvantaged young person for free. Some of those people then became our trainers for the corporate workshops, and then they got recruited. So that was a kind of one-for-one model trying to break this inequality problem caused by exponential technology.

With exponential technology there are people who are more confident and have more resources to exploit it, and people who are less confident, with fewer resources, who get exploited. That's been seen through history, and it's not particularly sustainable. You always end in big conflicts when that sort of thing happens, whether it's inside your organisation or between countries.

I think there is a fundamental point around any sort of exponential technology, including specifically AI. If you take a task-centred view of economics, it's about reducing the cost of doing a task, and with technology the cost of doing a task can be cheaper than the cost of living for a human. There are so many revolutions through history which link to this fact. The people-centred economists were saying "well, what if you look at the world differently, with technology which is more people-centred?" - that is, "how do we use technology to increase the value of humans and their interactions with work and with each other?" Then you get a productivity and culture transformation. So, with AI, it's about how you create a people-centred organisation which builds on a mission to use technology to increase the value of people in your organisation and the value of their interactions with other people and other people's work.

That is very different to reducing the cost of doing a task.

©2024 Moloney Search
