Tech Ethics 101: Planning For Unintended Uses Of Your Technology

Dama Sathianathan

Bethnal Green Ventures


About Dama Sathianathan

Episode 149: In LAB #36, Amardeep Parmar from The BAE HQ welcomes Dama Sathianathan, Senior Partner at Bethnal Green Ventures.

In this podcast episode, Dama Sathianathan discusses the importance of tech ethics, unintended consequences of technology, and responsible product design.


Show Notes

00:00 - Intro

01:23 - When founders should start thinking about the effects and unintended consequences of their technology.

02:36 - Ethical codes and concerns to consider.

04:44 - Case studies on the negative impacts of technology on marginalised communities.

06:08 - Practical steps for building tech responsibly and considering unintended consequences.

07:51 - Iterative process of mitigating potential harms through workshops.

10:35 - Early-stage ethical considerations and investor assessments.

15:19 - Challenges in tech ethics with the rise of AI and available resources.

18:10 - Practical examples of AI ethics and investor due diligence.

22:20 - Founders' responsibility to do no harm and adapt to evolving ethical standards.

23:40 - Exciting developments in tech for good space and tackling misinformation.

Headline partner message

From first-time founders to the funds that back them, innovation needs different. HSBC Innovation Banking is proud to accelerate growth for tech and life science businesses, creating meaningful connections and opening up a world of opportunity for entrepreneurs and investors alike. Discover more at https://www.hsbcinnovationbanking.com/


Dama Sathianathan Full Transcript

Dama Sathianathan: 0:00

Being acutely aware of who gets to build and who gets to decide how to distribute technologies is an important factor in the entire supply chain of how you distribute your tech products and services, essentially.

Amardeep Parmar: 0:19

Tech ethics 101 and planning for unintended uses of your technology. We're looking at when you need to think about these kinds of problems. What kind of code of ethics should you be following? What resources are out there to make sure that you're on the right side of the law? We're also looking at what investors look for in this situation. In the world of AI, how can you make sure you're on the right side of history? And so much more. To help us answer these questions, we have Dama Sathianathan, who is a Senior Partner at Bethnal Green Ventures. They're one of Europe's leading tech for good venture capital funds, backing companies at the early stage to really accelerate those making a difference. She's also a massive thought leader within this industry and is part of many different groups, such as VentureESG, to make sure tech is working for us rather than against us. I'm Amar from The BAE HQ, and this podcast is powered by HSBC Innovation Banking. So, a really important topic today, all about the ethics around technology. I think it's something where people often get really excited about what they're doing, and maybe they think about this a bit too late.

Amardeep Parmar: 1:23

So, from your experience, when should founders start thinking about the effects of their technology and the unintended consequences?

Dama Sathianathan: 1:29

Yeah, I mean, it's a really good question, because across the early-stage spectrum you have very different opinions on when might be the most opportune time for founders to actually think about this. But as very early-stage investors in this space, we think it's a conversation people should have within their teams from the get-go, even from the inception of their businesses. Essentially, because the way you think about responsible product design already starts with the people who are actually developing the tech products and services, or who are thinking about how it might be used by the people they're trying to serve. Because ultimately, and this is to quote Shion M Rura on this, if it's not diverse by design, it will be unequal by outcome. So the simple answer is: it really helps to start building the muscle memory to think about addressing some of those significant risks that could occur as companies scale, from literally day one.

Amardeep Parmar: 2:33

What are some good things to be thinking about here? What types of ethical codes or concerns might people not realise they should be thinking about? What areas should they be looking into?

Dama Sathianathan: 2:47

This is a topic that I love to talk about. It's actually something we do as part of our programme as well, a session around impact risk and unintended consequences, and we take it back to the ethics and the philosophy behind technologies. The first principle we usually come back to is by a chap called Melvin Kranzberg, who wrote up the laws of technology, and one of the most pivotal ones, which really accentuates why this is important, is that technology is neither good nor bad, and it's definitely not neutral. It basically relates to the fallacy that technology is just a tool. But technology is actually neither produced nor used in a social vacuum, and it's therefore no more neutral than the people or society that produce and use it. So being acutely aware of who gets to build and who gets to decide how to distribute technologies is an important factor in the entire supply chain of how you distribute your tech products and services, essentially. And if we look at tech ethics and codes, we used to bring up tech ethics and principles through the Tech for Good community quite a lot. There are about 10 principles or so that we published on our website many moons ago, covering all the different aspects, from the provenance of technologies to how you increase the diversity of the people who actually get to develop those technologies. And what do you do when you work with very marginalised and underserved communities to ensure there's no trade-off for them in being able to use X over Y?

Dama Sathianathan: 4:44

Because there have been loads and loads of case studies where very novel, innovative technology that is largely untested, and could be quite harmful, has been imposed on refugees, for example, because they were literally in a position where they had no avenue other than the support that was already provided.

Dama Sathianathan: 5:04

So facial recognition and biometrics in humanitarian and disaster responses, for example, have had a huge impact on, and adverse effects on, those people as well. When it comes to other ethics codes, the principles behind technology are a good shout. A friend of mine who teaches tech ethics at one of the UK universities wrote a really good blog around 10 particular technology rules as well. That could serve as a guiding principle or a starting point, and it includes readings and more research from social technologists like Kentaro Toyama, who wrote that technology can only amplify existing capacity and intent, and much, much more. So highly recommend, and we'll make sure you can include this in the notes as well.

Amardeep Parmar: 6:08

So you mentioned a few really interesting things there: one about who builds it and the effect that has, but also how your product might affect different communities. Let's say somebody is quite early stage and thinking about what they want to build, because they're trying to build their MVP, or they're just past their MVP. How do they do that in practice, in terms of making sure they don't have unintended consequences? Is it something you could do in phases, or what kind of testing or processes can help people at this moment in time?

Dama Sathianathan: 6:43

Yeah. So one example is an open-source tool that's licensed under Creative Commons, so anyone can use it. It's not just bespoke to BGV, which is great, and we use it with companies at pre-product, pre-revenue stage, so it's very early, and it can be adapted throughout the lifecycle of a company as well. Essentially, it's the consequence scanning workshop that was developed by Doteveryone, and it lives on the TechTransformed website from the Ada Lovelace Institute. It's essentially a blueprint for how you actually understand what the intended and unintended consequences are, and it gives you conversation prompts to bring in your entire team to have these conversations around: what do we actually want to do with a particular product feature or a particular product development cycle? That's the way you can also integrate it into your product development cycle time and time again as you grow and scale and reach maturity as a Series A company or beyond.

Dama Sathianathan: 7:51

This is an iterative process for whenever you need the assurance that you have actually mitigated any potential harms as well. And as for the consequence scanning workshop:

Dama Sathianathan: 8:03

Whenever we do this with our founders, quite often they're faced with: oh my god, there are so many things I have not thought about. From the environmental impact, actually asking, do I really need to build this technology, because it's using so much water and energy just to run a large language model, to people realising that the potential external risk of someone misusing their wearable technology for surveillance is something they would never have considered if it wasn't for that workshop.

Dama Sathianathan: 8:39

So it's a good starting point to actually have these conversations and anticipate what future risks might occur, and to actively think about the strategies you can bring in to mitigate those. That's not to say every single unintended consequence can be anticipated. You know, if someone is building a social media network, the fact that it could potentially bring about the demise of a democratic society is not necessarily something anyone can spot on day one. But there are so many examples of tech being used for nefarious or malicious purposes, or for unintended bad purposes, simply because people didn't anticipate those risks. That just really highlights that we do have a responsibility to think about where our own ownership and agency lies in actually mitigating those risks. Yeah, there are so many examples we've gathered from tech companies where technology has caused significant harm to people, but doing this exercise actually helps people to think:

Dama Sathianathan: 9:48

So, now that I'm aware, what sort of guardrails can I put in place to ensure these harms won't happen? And that, in turn, increases the trust between their potential users and the different stakeholders on their journey that this is a good product to use.
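For teams who want to try this themselves, here is a minimal sketch of how a consequence scanning session's output might be captured. The three framing questions follow the published Doteveryone kit as we understand it; the record fields and helper names are our own illustration, not part of the official toolkit.

```python
from dataclasses import dataclass

# The three framing questions from the published consequence scanning kit,
# as we understand it; check the official Doteveryone toolkit for exact wording.
FRAMING_QUESTIONS = [
    "What are the intended and unintended consequences of this product or feature?",
    "What are the positive consequences we want to focus on?",
    "What are the consequences we want to mitigate?",
]

@dataclass
class Consequence:
    """One consequence surfaced in the session (illustrative schema, our own)."""
    description: str
    intended: bool        # did we plan for this outcome?
    positive: bool        # amplify it (True) or mitigate it (False)?
    mitigation: str = ""  # strategy agreed in the session, if any

def session_summary(log: list) -> None:
    """Group the session output into focus vs mitigate lists."""
    for question in FRAMING_QUESTIONS:
        print(f"* {question}")
    print("\nFocus on:")
    for c in log:
        if c.positive:
            print(f"  + {c.description}")
    print("Mitigate:")
    for c in log:
        if not c.positive:
            print(f"  - {c.description} -> {c.mitigation or 'strategy TBD'}")

# Example inspired by the episode: a wearable that could be misused for surveillance.
log = [
    Consequence("Users can find lost items quickly", intended=True, positive=True),
    Consequence("Tag repurposed to track a person without their consent",
                intended=False, positive=False,
                mitigation="notify nearby phones when an unknown tracker travels with them"),
]
session_summary(log)
```

Keeping a log like this supports exactly the iterative reassurance Dama describes: re-run the exercise each product cycle and compare the output.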

Amardeep Parmar: 10:07

One of the examples that comes to mind as you're saying that is the Apple AirTags, where obviously the idea is that you don't lose something, but they can also be used to stalk somebody, and there are all these other uses that you don't want it to have. How do you control that? As I said, once you put the technology out there, it can be used for those bad reasons, right? So it makes a lot of sense what you're saying: the earlier you can think about it, the better, before Pandora's box is opened. And even when it comes to, for example, the accessibility aspects. From a tech background, I know on websites it's about the alt text, for example, to make sure that people who are sight impaired are able to hear what an image is, and the contrast between colours (a quick contrast-check sketch follows this exchange). I think that's another one a lot of people just forget: if you have very good eyesight, you might not realise that a lot of people can't really read what's on your page, and it's a really frustrating experience. That kind of element can be quite difficult sometimes, especially for entrepreneurs, because they're so wrapped up in what they're doing. But with all of these different things about unintended consequences, you just don't want to put something out there which then causes harm you didn't intend. And one of the things from your perspective as an investor: when you're looking to invest in these kinds of companies, or somebody is trying to attract you, how can somebody show they've thought about these consequences? Because I guess even when they're trying to apply for funding from you, or to join your accelerator, you're going to be assessing them on this too, right? Are they a company doing something for good? So if you're at the earlier stage: okay, I want funding from Bethnal Green,

Amardeep Parmar: 11:48

or I want funding from somewhere else, how do I prove to them that what I'm doing is more good than bad?
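A quick aside on the colour-contrast point raised above: WCAG 2.x defines contrast as a measurable ratio, so it can be checked in code. A minimal sketch using the standard WCAG luminance formula; the example colours are invented:

```python
def _linearize(channel: int) -> float:
    """Convert an 8-bit sRGB channel to linear light (WCAG 2.x definition)."""
    c = channel / 255.0
    return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4

def relative_luminance(rgb: tuple) -> float:
    """WCAG relative luminance: weighted sum of the linearized channels."""
    r, g, b = (_linearize(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(fg: tuple, bg: tuple) -> float:
    """WCAG contrast ratio (L1 + 0.05) / (L2 + 0.05), lighter colour on top."""
    l1, l2 = sorted((relative_luminance(fg), relative_luminance(bg)), reverse=True)
    return (l1 + 0.05) / (l2 + 0.05)

# Light grey text on white looks fine to many sighted users but fails AA (4.5:1).
ratio = contrast_ratio((150, 150, 150), (255, 255, 255))
print(f"{ratio:.2f}:1 -> {'pass' if ratio >= 4.5 else 'fail'} for AA normal text")
```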

Dama Sathianathan: 11:58

Yeah, that's always quite hard to prove at the early stages, right, because it feels like it's more art than science, especially when people are just about deploying their MVPs and prototypes. But I think for us it's mostly a combination, and I guess that's the case for most impact investors in the space as well. It's really about assessing people's intention in wanting to do this. And that's a slight conundrum, because we do also say that just by virtue of doing good doesn't mean that you are good, which then extends to the ESG lens of this: how do you actually think about operating responsibly as well? That goes beyond your tech product or service. It goes into your governance structure as a company, the way you treat your employees, the way you interact with different stakeholders, and your wider impact on people and planet, beyond the outcomes you might want to achieve with your product. And there are a few ESG VC groups knocking about, I'm involved with VentureESG, for example, who have really great tools available for VCs and LPs to do a lot more due diligence and help them assess the maturity of companies when it comes to asking those hard, fundamental questions about how you operate as a company. In terms of what founders can do, I think there are fundamentally a few areas that always make sense to prep for and also include in data rooms, for example: if you've done a consequence scanning workshop at the very earliest stages, include that. It's very easy to run one within your own company, because the workshop guidelines actually give you a facilitation guide on how you might run it yourself.

Dama Sathianathan: 13:51

So it's definitely worthwhile checking out.

Dama Sathianathan: 13:54

It also provides you with a few tools to track what types of consequences you've listed and what strategies you might deploy, because ultimately you can't address everything. There might be a whole universe of issues to tackle, and there's only so much a founder can do.

Dama Sathianathan: 14:17

But as long as you have the awareness, and as long as you can show us that you have gone through a prioritisation matrix and actually identified, these are the high-risk, high-impact consequences we absolutely need to mitigate, then that's enough assurance for us to make an investment in you. It's also enough assurance that you're not only thinking about this, but you're also incredibly receptive to feedback, incredibly humble about the shortcomings of what you might be building, and super eager to tackle those as well, which is exactly the right pathway for founders. Because I do fundamentally believe that intent is incredibly important, but it isn't enough. So being able to tap into resources, and tapping into investors who can provide a lot of value in actually supporting you on this journey, is an incredibly smart move for founders as well.
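As a rough illustration of the prioritisation matrix Dama mentions, here is a minimal sketch assuming a simple likelihood-times-impact score. The scales, entries, and field names are invented for the example and are not BGV's actual criteria:

```python
from dataclasses import dataclass

@dataclass
class RiskEntry:
    """A consequence scored for prioritisation (illustrative, not BGV's format)."""
    consequence: str
    likelihood: int   # 1 (rare) .. 5 (almost certain)
    impact: int       # 1 (minor) .. 5 (severe)
    mitigation: str

    @property
    def priority(self) -> int:
        # Classic risk-matrix score: likelihood times impact.
        return self.likelihood * self.impact

register = [
    RiskEntry("Model underperforms for under-represented groups", 4, 5,
              "recruit a representative user-research panel; pay participants"),
    RiskEntry("Wearable misused for surveillance", 2, 5,
              "on-device alerts when tracked by an unknown device"),
    RiskEntry("High energy/water footprint of model training", 3, 2,
              "prefer smaller models; measure and report usage"),
]

# Mitigate the highest-scoring items first; a data room might include exactly
# this kind of table alongside the consequence scanning output.
for entry in sorted(register, key=lambda e: e.priority, reverse=True):
    print(f"[{entry.priority:>2}] {entry.consequence} -> {entry.mitigation}")
```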

Amardeep Parmar: 15:19

We hope you're enjoying the episode so far. We just want to give a quick shout out to our headline partners, HSBC Innovation Banking. One of the biggest challenges for so many startups is finding the right bank to support them, because you might start off with a traditional bank, but they don't understand what you're doing. You're just talking to an AI assistant, or to somebody who doesn't really understand what it is you're trying to do. HSBC have got a team they've built out over years to make sure they understand what you're doing. They've got the deep sector expertise, and they can help connect you with the right people to make your dreams come true. So if you want to learn more, check out hsbcinnovationbanking.com. Obviously, what's happened in the last year or two is that things seem to have got a lot more complicated with the addition of AI, and you've been in this space for a significant amount of time and looked at so many different companies. How has that adjustment been from an investor perspective? It's not a new technology, but it seems to have moved very quickly, and obviously regulators are trying to keep up to date, founders are trying to keep up to date, investors are trying to keep up to date. How should people listening right now think about responsible AI?

Dama Sathianathan: 16:24

Yeah. So for investors, there are quite a few resources available now, which a lot of collectives have worked on. Responsible Innovation Labs, for example, have a protocol and a framework with a few guardrails and guidance around how you might assess companies effectively, and what sort of questions you might ask your companies in order to diligence their approach to responsibility. And for founders, on the flip side, there are also a few resources available, from the likes of the Startups & Society Initiative, a US-based organisation. They're really, really great and have put together guidance on this as well. Google, also, published a People + AI Guidebook, which, you know, it's quite funny to say that Google has published this, but they have. The People + AI Guidebook is a really great resource for founders who want to explore whether AI is actually needed, or what the differences could be between automating a process versus deploying generative AI to build something new. It's quite an interesting founder resource, to be honest, because it gives you quite practical applications of how you might think about the different processes, and really question: do I need to build this, or is this just a nice-to-have? And generally, when it comes to AI, the way we've thought about asking questions and diligencing companies, to give you a very practical example: we get a lot of companies who basically say, oh, we're building this new algorithm, we can do X, Y, Z, across health tech, across sustainability and climate, across different domains. And the fundamental question is always: so where does your training data come from? Because, ultimately, technology is not neutral. Who are the people feeding the data into this? Where is this data coming from? Particularly when it's health tech companies with the aspiration of saying, oh, we want better health outcomes for people from ethnic minority backgrounds. Great. Data about people from those backgrounds has always been notoriously hard to come by.

Dama Sathianathan: 19:15

So how do you plan on developing this if it's targeted at a particular segment of the population that has notoriously been left behind?

Dama Sathianathan: 19:26

So we usually ask these types of questions, and if people can answer them, even if it's to say, these are aspects we're thinking about, we don't have the answers yet, but we're addressing this by, for example, engaging particular communities in our user research, building a panel to guide the questions we need to tackle when feeding more data into this, and paying people for the user research they're contributing, particularly if they're from traditionally marginalised communities. Those are the types of answers that give us enough assurance that they've got their brains and hearts in the right place to tackle these issues head on. And hopefully we'll be able to help them and steer them in the right direction, so that they actually build tech products and services that work and are adopted by the people they're trying to serve. Sorry, long-winded answer to this.

Amardeep Parmar: 20:28

No, no, it's really interesting. I think you've answered some of what I was going to ask next. What about founders thinking about how ethics change as well? Because I guess that's one of the interesting things too: the ethical code of, say, 10 years ago, and the ethical standards then, were probably a lot looser than what we consider today to be the correct standard. And how can founders keep up with that? Because in 10 years' time, people might look at what we're saying right now and think, oh, they're so backward, because standards keep adjusting and changing with time. How can founders keep up with that adjustment? What's the latest way to make sure you're being intentional and helping people in the right way?

Dama Sathianathan: 21:13

Yeah, I mean, this is, I feel, a very fundamental question, but also incredibly subjective, right? Because there isn't one answer to even what we mean by tech for good, and how can we be prescriptive as to what outcomes are best for whom? But I do think the easy answer is basically taking a human rights-based approach. So really taking the lens of do no harm, and that extends to absolutely everyone, always using that as the moral compass that guides your actions, to basically ensure you do no harm with what you're building. That's not to say something might not occur that you wouldn't have noticed; we are human, after all, and we all make mistakes. But essentially, if we take that moral compass of doing no harm to people and planet as the baseline, then I think we'll be all right.

Amardeep Parmar: 22:20

And obviously you're involved in so many different initiatives in this tech for good space. What are you excited about? What do you think is coming out in the next few years, or maybe even sooner than that, that is really exciting in the space, and that keeps you motivated and excited about what you're doing?

Dama Sathianathan: 22:39

Oh, that's a good question. What keeps me motivated? I think the easy answer is always that there are so many issues that just need tackling, and so many issues that affect so many different people, with lived experiences very different from where I come from. And I'm very excited about the potential of people from all walks of life being able to just be young, scrappy and hungry, essentially, and build something that could help make the world a better place. To be honest, I'm always quite fascinated; in my job, I get to speak to so many founders on a regular basis about their ideas and what they're trying to build, and it's a very novel situation to always be surrounded by founders trying to build something for good.

Dama Sathianathan: 23:40

Sometimes there are a few wacko ideas where you're literally like, I mean, I'm sorry, this is surveillance technology you're thinking about right now, even though you maybe want to, you know, reduce divorce rates in a particular country.

Dama Sathianathan: 23:55

You can't just build a wearable ring that tracks where your partner is going, for all the right reasons or whatever. But on the flip side, you also have very significant, amazing ideas, where you might, as an individual, feel quite jaded about the state of the world, but then come across so many inspiring people just trying to do something about it as well. And I think in the next couple of years in particular, and I feel like this year particularly, with elections, there's so much happening.

Dama Sathianathan: 24:32

It's one of the biggest election years in the world, with so many people in so many countries, more than 30 countries, being able to vote for the first time even, which is hugely fascinating. But the past years have also just seen an increased level of disinformation and misinformation. So what I'm quite excited about is seeing a proliferation of companies actually doing something about enabling people to learn and inform each other better, and to make more conscious decisions about how we consume different types of media. For example, I've seen loads and loads of amazing stuff being built in the civil society sector around tackling misinformation and disinformation, but I haven't seen much yet from for-profit companies trying to do this, or trying to do it at scale, because it is a relatively nascent field. I would love to see more collaboration across different sectors in this domain, and to see more of this being done that truly helps us build a better society.

Amardeep Parmar: 25:45

So thanks so much for all the insights you gave there. We're going to move on to a quick five questions now. So, first question: who are three British Asians you think are doing incredible work and you'd love to shout out?

Dama Sathianathan: 25:58

Okay, okay. I'll shout out Srishti, who works at a climate fund. She's an amazing climate tech investor. I will shout out Drishdey, who is the founder of Texpert AI, a wonderful portfolio company, and she's an amazing human being trying to do something about a very crucial problem in the space. And Nina, Nina Mohanty, founder of Bloom Money, just because I ran into her two days ago, so she's top of mind at the moment.

Amardeep Parmar: 26:33

Awesome. And if people want to find out more about you, or more about Bethnal Green Ventures, where can they go?

Dama Sathianathan: 26:40

If they want to find out more about Bethnal Green Ventures, they can go to bethnalgreenventures.com. If they want to apply for funding, because we're open for applications at the moment, they can go to forward slash apply, and they can get in touch with us anytime to either learn more about BGV in a Q&A session, or talk to anyone in the BGV team for 20 minutes about their idea. If people want to find out more about me, I hate LinkedIn, but LinkedIn is probably the best shot. I am the only Dama who works at Bethnal Green Ventures, and I will aim to get back to people on LinkedIn, but, as I said, I really, really hate LinkedIn. So if people do get in touch, and I would highly encourage them to, just add a note to say why, because it helps to filter through the noise.

Amardeep Parmar: 27:36

Awesome. And is there anything you need help with right now, or that Bethnal Green Ventures needs help with, that the audience could help you with?

Dama Sathianathan: 27:44

I mean, we would love to have more tech for good founders from all walks of life apply. If that's you and you want support and loads of capital, please do get in touch. But generally, we're always quite keen to connect with people in the ecosystem who, even if you're not building a tech for good product yourself, can help the founders in our community, or can help anyone who is thinking about making the jump to the tech for good sector. Do just get in touch. Join our meetup community as well; we are at meetup.com forward slash tech for good. And, yeah, get involved and join some of the events that we run as well.

Amardeep Parmar: 28:27

Amazing, so thanks so much for coming on today. Have you got any final words?

Dama Sathianathan: 28:30

I mean, we've got heaps of resources available around responsible product design as well, which we'll publish in the coming weeks, so I'm happy to share a link to all of those resources. But if anyone needs help and support with actually running an unintended consequence scanning workshop for founders, I'm very happy to do this, because I think more people should be able to build that muscle memory to anticipate future risks as well. So I'm very happy to put my time forward for that.

Amardeep Parmar: 29:07

Thank you for watching. Don't forget to subscribe. See you next time.
