News & Insight
Bristol talks tech
On Thursday 6 December 2018, HLaw headed to Bristech, one of Bristol’s major annual tech events. Hosted at the Watershed, a cosy cinema on the banks of the Avon, Bristech Conference 2018 brought together Bristol’s active tech ecosystem. The conference offered a rich programme of 18 talks across four cinema theatres to more than 500 eager participants, and also featured ‘Silicon Gorge’, a closed-door pitching event for startups looking to raise their next round of growth capital.
The crowd was a mix of startups looking for advice, inspiring ideas and/or funding; SMEs and larger corporations active in the tech sector; and venture capitalists in search of the next idea to back.
Bristech first started as meetups in Bristol: tech enthusiasts, experts and novices alike, gathering over coffee and snacks to discuss experiences, problems and brilliant ideas. The organisation now holds around 10 meetups a year, usually inviting speakers – professional developers, company founders, academic researchers, hobbyists and creatives – to share their knowledge on topics from across the technical spectrum. Bristech now counts more than 2,000 members.
In addition to those regular meetups, Bristech holds its major event – the annual conference – in December.
Talks spanned augmented reality, computer vision, system resilience and security, and women in tech.
Having written my LLM thesis on self-driving cars, and having a keen interest in all kinds of AI (artificial intelligence) technologies, one of those 18 talks particularly caught my attention: ‘Holistic ethical machines’ by Ben Byford. This rather short talk centred on the importance of developing some kind of ethical standard for AI-powered machines. To my disappointment, it stayed very general, explaining that any company designing AI-powered machines should ask itself the right ethical questions and try to turn the answers into code to be incorporated into the relevant algorithms.
The most interesting part of the talk – as is often the case at such events – came with the questions and crowd interaction.
The over-hyped ‘trolley problem’ came up, of course. However, Ben was reluctant to engage with it, as he considered the question off topic. The trolley problem is concerned with only one possible end result: the point at which the code has failed and responsibility must be assigned, most likely by law. Ben strongly believes – and I must say I share his view on this point – that ethics is not something one can really turn into laws; it should instead be so embedded in one’s mind that one would innately refuse to design a machine that does not operate by the same set of rules as oneself.
Some in the audience disagreed, taking the more sceptical view that people do not do anything without being “forced” by some kind of regulation. Yet this theory has been contradicted in numerous instances; the death penalty, for example, does not decrease crime rates.
More interesting questions followed, centred on the ethics spectrum and on which countries or individuals will decide for the rest of the world how ethics is understood and applied to AI technologies. Those who design AI-powered machines will likely be the ones defining what is ethical and what is not, and ethics vary greatly across countries and regions of the world. Since some countries are more active in AI technologies than others, will they ultimately be able to impose their take on ethics on everyone else? And since machine designers will most likely be commissioned by wealthy individuals or corporations, will a small minority end up dictating what ‘ethical’ means for everyone else?
Is your curiosity piqued yet?
I left the conference having learned a few new things, but also with new questions and eager for more answers.
Looking forward to attending next year!