
Introducing BrainSafe - Design that Respects your Cognition

The Dopamine Slot Machine

The UK government has finally named the problem — Keir Starmer's crackdown on infinite scroll and autoplay is the most concrete UK action on addictive design we've ever seen. But naming features is whack-a-mole: ban infinite scroll today, and the platforms ship something functionally equivalent under a different name tomorrow. 

In this episode, Andrew walks through what the government's getting right, what's missing, and introduces the BrainSafe Standard — a six-dimensional certification framework that asks the structural question rather than the feature-by-feature question. We cover the six dimensions of the BrainSafe Addictive Interface Assessment Framework, share scores from worked assessments of Snapchat, Netflix, Times Tables Rock Stars, and Signal, and explain how businesses can join the 2026 pilot programme. Find the framework and pilot details at https://brainsafestandard.co.uk/, and follow along as we unpack each dimension in depth on LinkedIn at https://www.linkedin.com/company/brainsafestandard/.

Transcript

Intro

Introduction to 'Dopamine Slot Machine' and Technology's Impact on Kids

00:00:12
Andrew Wilmot
Good morning, good day, good evening, whenever you are: welcome to The Dopamine Slot Machine, the podcast that discusses what you need to know about the technology your children are using. How is it designed to get your kids hooked? How does it make money from your children? And what can you do to make sure that your child's relationship with tech is a positive one? My name is Andrew. I'm a dad of two and a lifelong gamer. And today's episode is something I'm super excited to introduce.
00:00:34
Andrew Wilmot
But first, I want to cover some of the news from the last couple of weeks, because that really sets up what we're introducing, which is called BrainSafe.

UK Government's Plan to Tackle Addictive Social Media Features

00:00:42
Andrew Wilmot
And yeah, let's get into it. So firstly, huge week in news on addictive design. A little while ago, Keir Starmer announced that the government is going to be cracking down on the addictive elements of social media.
00:00:56
Andrew Wilmot
And the language he's used is genuinely striking, because it's the first time ever we've got a UK prime minister naming design mechanics. Not harmful content in the abstract, or online harms as some sort of vague category.
00:01:09
Andrew Wilmot
Keir Starmer named infinite scroll, and he named auto-playing videos. And he talked in plain English about features that keep children hooked on their screens for hours at a time. The government has been running a consultation as well, which it opened in March.
00:01:21
Andrew Wilmot
But this week, right, they have announced that they will be implementing some restrictions.

Challenges in Regulating Social Media Design

00:01:29
Andrew Wilmot
OK, that doesn't sound like news, more just sort of a holding position.
00:01:34
Andrew Wilmot
But progress is being made here. The mechanisms Starmer's naming are exactly the sort of mechanisms I've been talking about. But I'm not celebrating completely unreservedly here, because there are some real problems with how this is being approached.
00:01:48
Andrew Wilmot
I want to walk you through three of them before we get to what we're trying to do about it. So the first problem, and this is the big one, is that naming specific features is a form of regulation that has a structural weakness baked into it.
00:02:00
Andrew Wilmot
The moment you outlaw infinite scroll, the next product that gets shipped won't have an infinite scroll. It'll have something functionally equivalent under a different name. Now, what does that look like in practice? Well, it could be a feed that loads 10 posts, then gives you a tap to continue button, with that button being pre-highlighted before your finger even gets there.
00:02:18
Andrew Wilmot
Is that infinite scroll? Technically no, functionally yes. How about a scroll that lets you watch a thousand videos before it prompts you to go to the next page?
00:02:29
Andrew Wilmot
Is that infinite scroll? Again, technically no, functionally yes. Or a feed that ends after 50 posts, but then offers a personalised "you might also like" carousel that's another 50 posts. Or a feed that does genuinely end, but then a push notification arrives 30 seconds later saying your friend just posted, and now you're back in.
00:02:47
Andrew Wilmot
There are an unlimited number of ways to engineer continuous engagement, and infinite scroll is just one of them. It's just the most visible, most well-known one at this point.
00:02:58
Andrew Wilmot
So this is a sort of catalogue approach to dark patterns. It has a fundamental limit, which is that you can only catalogue what you've already seen, and you can't catalogue what hasn't been built yet. For a regulator, that means you're permanently one step behind. You can only tackle dark patterns that have been invented. It's whack-a-mole.
00:03:15
Andrew Wilmot
But the mole has a multi-billion dollar industry behind it, with full-time behavioural psychologists and staff whose entire job it is to ship the next mole. The second issue I want to bring up is Australia, because Starmer has explicitly looked at the Australian model.
00:03:28
Andrew Wilmot
Australia is a really useful test case: they've banned under-16s from a number of social media platforms entirely since December last year. And kids are getting around it with VPNs, with fake accounts, by borrowing adult logins.
00:03:43
Andrew Wilmot
And the underlying products haven't changed: the same mechanics, the same business models, the same interfaces. The age gate doesn't fix the design; it just delays the harm to children until they're a bit older. Now, I do want to clarify and say that in terms of the number of children who have access to social media, the measures that Australia has taken have been a massive success. Of children who had social media before, only 70% of them still have access to at least one social media platform.

Broader Focus Needed Beyond Social Media for Effective Regulation

00:04:14
Andrew Wilmot
The third issue, and we haven't talked about this much yet, is that the consultation is focused on social media, but the same mechanics are being used all over the place. Games, streaming services, ed tech, even if you use Vinted, that's got infinite scroll baked in.
00:04:29
Andrew Wilmot
Take Times Tables Rock Stars, which is a maths app aimed at children, used in primary schools across the UK. My daughter uses it. It's everywhere. We've assessed it as part of a worked example for the BrainSafe Framework, which we'll get into in a minute, and it scored high for compulsive design due to the competitive ranking system and the speed-based gamification. There are some real concerns there.
00:04:50
Andrew Wilmot
But it's not social media, and therefore it's not in the scope of this consultation. The child who's doing 40 minutes a night on Times Tables Rock Stars to keep their school ranking up is being subjected to design pressures that adults wouldn't accept in a workplace.
00:05:03
Andrew Wilmot
And none of what Starmer is proposing addresses that. So just to summarise where we are from this week: the government is doing something. It's the most concrete UK action on design mechanics we've ever seen.
00:05:15
Andrew Wilmot
But naming features is going to create a regulatory whack-a-mole dynamic. The age limits don't fix the underlying products, and the scope is too narrow to catch the broader category of problems. And so we need a different approach.

Introduction to the BrainSafe Framework

00:05:28
Andrew Wilmot
We need an approach that doesn't just ask, "Is this product using a banned feature?" but rather, "Does this product, however it's designed, exhibit the structural properties that produce compulsive engagement?" That's a different question, and it's a question that myself and a couple of colleagues have tried to answer. So we've built something called the BrainSafe Framework. I'll give you the elevator pitch. It's a certification standard for digital products, in a similar way to how Fairtrade, in theory, means that if you're buying a Fairtrade product, you know that it is delivering a fair deal to the farmers and the whole supply chain.
00:06:07
Andrew Wilmot
We've created a business certification system for digital products that respect the user's cognition.
00:06:17
Andrew Wilmot
It's a voluntary, independent, third-party assessment that tells consumers, regulators and the businesses themselves whether the product is structurally, in terms of compulsive design, safe to use.
00:06:28
Andrew Wilmot
It's built on a methodology called the BrainSafe Addictive Interface Assessment Framework, or the BAIAF for short, which is what I'll be calling it for the rest of this episode. It evaluates a product across six dimensions of addictive potential, with each dimension being scored from one to five.
00:06:44
Andrew Wilmot
This score then combines into a severity classification, which then maps onto one of four tiers: BrainSafe Certified, Certifiable with Conditions, Under Review, or Not BrainSafe.
00:06:56
Andrew Wilmot
And the critical thing, back to the whack-a-mole problem we were just discussing, is that the BAIAF doesn't ask whether a product uses a named bad feature. It's completely agnostic to any sort of dark-pattern list.
00:07:08
Andrew Wilmot
It asks whether the product has the underlying structural properties that produce compulsive engagement. That's a key design choice. The whole framework hinges on it. Existing dark pattern taxonomies ask, does this interface contain a known deceptive pattern?
00:07:21
Andrew Wilmot
We ask, does this interface produce compulsive engagement? And that distinction means the framework can evaluate a feature that hasn't even been invented yet by designers who haven't even entered the industry yet.
00:07:33
Andrew Wilmot
We want to catch the next infinite scroll before it is made. So, the six dimensions. Let me run you through them. And as I do, you can sort of picture a thermometer scoring one to five, where one is "this product is fine" and five is "critical, this is genuinely concerning". The product gets a score on each dimension, and the scores combine.
00:07:51
Andrew Wilmot
So the first is autonomy analysis. This dimension asks: can the user make free, informed decisions about their engagement with this product? Do they understand what's being done to them? Are the defaults neutral, or do they tilt towards engagement?
00:08:03
Andrew Wilmot
Is the way out of the product as obvious as the way in? If a product is opaque about its algorithm, has defaults that maximise engagement out of the box, and makes account deletion, or even just exiting the app, a multi-step deal, then it's going to rack up a score here.
00:08:18
Andrew Wilmot
Everything else in the assessment assumes this, because if a user doesn't have the basic information they need to make a free choice, then asking whether they're choosing to keep using the product is a meaningless question. They can't be choosing. The conditions for choice have been removed.
00:08:32
Andrew Wilmot
Dimension two: variable reward profiling, or the slot machine dimension, the one we got the name of this podcast from. Does the product engineer unpredictability to sustain engagement? Are reward schedules, meaning the timing and the value of what happens, designed to create that anticipatory dopamine hit? Keep in mind, value can mean a wide array of things; it's not just monetary value.
00:08:55
Andrew Wilmot
The reference point here is genuinely a slot machine. The same variable ratio reinforcement schedule that gambling regulators spent decades writing rules around is being used in products that we all use.
00:09:08
Andrew Wilmot
When you pull down to refresh a feed, you don't know what's going to happen. You don't know if you're going to get content that is engaging. There's an element of randomness to it, of unpredictability.
00:09:19
Andrew Wilmot
Equally, when you pull a slot machine handle, you don't know what's going to happen, apart from that the slots are going to spin. The mechanism is the same, but the regulation is not. The slot machine that uses variable reward to sustain play in the casino has to comply with comprehensive licensing, age-gating and harm-minimisation requirements, whereas a social media feed that uses variable reward to sustain scrolling in a teenager's bedroom does not.
00:09:41
Andrew Wilmot
That gap is one of the things that BrainSafe is trying to address. If the engagement loop of the product could, without meaningful alteration, be described as a slot machine, so variable input, variable outcome, reward optimized for anticipatory activation rather than delivered value, then it fails this dimension.
00:09:59
Andrew Wilmot
And a surprising number of products fail it. I mentioned Vinted earlier. Vinted fails on this: the more you scroll through Vinted, ultimately, you're looking for that reward of finding something that you're interested in at a good price.
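The slot-machine mechanic described above is a variable-ratio reinforcement schedule, and it can be sketched in a few lines: each pull (or feed refresh) pays out with some fixed probability, so the gap between rewards is unpredictable. This is an illustrative simulation, not anything from the BAIAF itself, and the payout probability is invented:

```python
import random

def pulls_until_reward(p: float, rng: random.Random) -> int:
    """Number of pulls (or feed refreshes) until a reward lands under a
    variable-ratio schedule: each attempt pays out with probability p."""
    pulls = 1
    while rng.random() >= p:
        pulls += 1
    return pulls

rng = random.Random(42)
# Ten reward gaps at a 1-in-4 payout rate: the average is predictable,
# but each individual gap is not, and that unpredictability is what
# drives the anticipatory loop.
print([pulls_until_reward(0.25, rng) for _ in range(10)])
```

On average the reward arrives every 1/p pulls, but no single pull tells you anything, which is exactly the property gambling regulators spent decades writing rules around.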
00:10:13
Andrew Wilmot
The next dimension: friction asymmetry mapping. Is it harder to leave than to enter? Are exit paths as clear as entry paths? Is there a structural thumb on the scale that says once you're in, you're staying in? Think about cancelling a gym membership. Think about deleting your account from a streaming service. Think about turning off notifications.
00:10:31
Andrew Wilmot
Compare these experiences to signing up to the same product. Sign-up is one tap; cancellation is a phone call, an email confirmation, a cooling-off period. That's friction asymmetry. This is a dimension where regulators are already on the case: the FTC's Click to Cancel rule in the US is essentially working on this dimension.
00:10:51
Andrew Wilmot
The EU's Digital Services Act, Article 25, covers it. The UK Children's Code's high-privacy-defaults provision bears on it too, but adherence to the UK Children's Code is basically non-existent, right?
00:11:06
Andrew Wilmot
So of all the six dimensions, this is the one where regulation is furthest along, but you still see friction asymmetry everywhere. A high score here means there is going to be a degree of regulatory exposure.
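As a toy illustration of what an assessor might look at on this dimension, you can compare the number of steps in the entry flow with the number in the exit flow. The step counts below are hypothetical, not measurements of any real product:

```python
def friction_ratio(entry_steps: int, exit_steps: int) -> float:
    """Exit friction relative to entry friction. A ratio of 1.0 is
    symmetric; well above 1.0 is the structural thumb on the scale."""
    if entry_steps < 1 or exit_steps < 1:
        raise ValueError("each flow needs at least one step")
    return exit_steps / entry_steps

# Hypothetical product: one-tap sign-up, but cancellation takes a
# seven-screen flow plus an email confirmation.
print(friction_ratio(entry_steps=1, exit_steps=8))  # 8.0
```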
00:11:18
Andrew Wilmot
The next dimension, dimension four: temporal boundary erosion. That's a big phrase, but does the product remove natural stopping cues? Does it eliminate the moment at which a user would naturally pause and ask, do I actually want to keep doing this?
00:11:34
Andrew Wilmot
You can compare, for instance, changing episodes when you had a VHS, having to change tapes and rewind and things like that, versus auto-playing videos and auto-next. Netflix will let you watch three episodes of something before it asks if you're still watching. And that prompt resets if you so much as turn the volume up.
00:11:55
Andrew Wilmot
Infinite scroll is a real textbook example here. Changing from pagination where you see a set number of posts and then have to manually choose to see more to being able to scroll forever is classic temporal boundary erosion.
00:12:08
Andrew Wilmot
Autoplay, as mentioned, is another textbook example, but so is the absence of session time indicators, the absence of pagination in general, the absence of a clear "you've finished" moment. Games that can never be completed, that are very round-based, might fall into this.
00:12:23
Andrew Wilmot
Researchers have a name for this: they call it Time Fog, the designed absence of temporal cues. And what Time Fog produces is continuous engagement with no natural assessment point. Users describe coming out of that and not knowing where the last hour went; you've probably experienced that with social media. So Starmer's announcement regarding infinite scroll and autoplay is directly targeting this dimension, but there are other ways to create temporal boundary erosion, and addressing infinite scroll and auto-playing videos in social media doesn't
00:12:59
Andrew Wilmot
erase temporal boundary erosion as a category, and doesn't impact the use of it outside of social media. You can score highly on this without using either of the named UX mechanics that Keir Starmer mentioned.
00:13:15
Andrew Wilmot
So we need a framework that scores the category rather than the specific features. Dimension five: social obligation engineering. Does the product manufacture social pressures that compel engagement beyond organic desire?
00:13:27
Andrew Wilmot
The classic example here is the Snapchat streak. Two users have to exchange snaps every 24 hours or the streak resets to zero. You hear users report maintaining streaks with people they barely speak to, sending blank photographs of ceilings just to keep the counter going.
00:13:43
Andrew Wilmot
I've seen peers do this, I've seen kids do this: a picture of the inside of a bag sent, streak preserved. It's a really weird thing to watch when you stop and think about what's actually happening. And this obligation is artificial. It would not exist without the platform engineering it. There's no benefit, outside of the Snapchat streak, to sending each other blank photos.
00:14:02
Andrew Wilmot
And it's not just streaks, right? It's a corporate design choice to make children feel that disengaging from the product harms their friends. Read receipts, typing indicators, location sharing, last-active timestamps, best-friend rankings: all of these manufacture social pressures that wouldn't exist without the platform's choice to manufacture them.
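The streak mechanic described above is, structurally, just a counter with a manufactured deadline. A minimal sketch: the 24-hour window matches the episode's description of streaks, and everything else is illustrative:

```python
from datetime import datetime, timedelta

STREAK_WINDOW = timedelta(hours=24)

def update_streak(streak: int, last_snap: datetime, now: datetime) -> int:
    """Extend the streak if a snap arrives inside the window; otherwise
    reset to zero. The reset is the manufactured social pressure: miss
    one day and the accumulated number is gone."""
    if now - last_snap <= STREAK_WINDOW:
        return streak + 1
    return 0

start = datetime(2025, 1, 1, 9, 0)
print(update_streak(100, start, start + timedelta(hours=23)))  # 101
print(update_streak(100, start, start + timedelta(hours=25)))  # 0
```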
00:14:23
Andrew Wilmot
And there's basically no regulatory coverage of this. There's nothing in the current law that specifically addresses this dimension. So products can rack up a five on this dimension and not be in violation of any regulatory requirements.
00:14:36
Andrew Wilmot
And so this is where we're trying to do the most work, where regulation hasn't caught up. Something I want to touch on here is the intersection with AI. You might have heard the phrase: we've had the attention economy, but we're shifting towards the attachment economy, where people form maladaptive attachments to AI products. That is a form of social obligation engineering.
00:14:59
Andrew Wilmot
And we've designed this dimension not just to catch known examples of social obligation engineering, but to capture future ones as well.

Evaluating Digital Products Using the BrainSafe Framework

00:15:13
Andrew Wilmot
The final dimension, dimension six, is vulnerability amplification. Are the mechanisms in dimensions one to five being deployed on populations that are particularly susceptible? The most obvious example here is apps that children use.
00:15:28
Andrew Wilmot
So, for instance, it's natural to expect that an app that adults use would need less in the way of safeguards than one exclusively used by children.
00:15:43
Andrew Wilmot
Now, it's not just children. An example that I've given before when discussing the BrainSafe Framework is a social media platform that has overwhelmingly become used by members of vulnerable or marginalised groups. To take a real-life example, Tumblr fits into that category: it's repeatedly cited as a digital third space used by autistic and LGBT youth. A platform like that would also score on vulnerability amplification.
00:16:17
Andrew Wilmot
And just to be clear here as well, vulnerability amplification is not necessarily in itself a bad thing. Say you design an edtech app, and I can be quite critical of edtech, but let's say you've designed one which by any other measure is fantastic.
00:16:34
Andrew Wilmot
It's still going to score top marks in vulnerability amplification. Now, that's not going to be enough to push it into any sort of problematic area, but it still needs to be recorded, right, that your users are children.
00:16:46
Andrew Wilmot
So, yeah, there's a substantial evidence base showing that adolescent reward circuitry responds more intensely to things like social validation cues than adult reward circuitry.
00:16:58
Andrew Wilmot
We know that the link between gambling adjacent mechanics and problem gambling is stronger in teenagers than in adults. Brains are still developing the regulation that lets adults override impulses.
00:17:09
Andrew Wilmot
So a mechanism that I just find annoying (I think my brain's finished developing, maybe) can be genuinely compulsive for a teenager.
00:17:20
Andrew Wilmot
So any framework that doesn't separately ask the vulnerability question is incomplete, and so we weight for that. A product that scores moderate across dimensions one to five but is targeted at children gets pulled into higher cumulative scoring.
00:17:32
Andrew Wilmot
The question is not just "does the product use these mechanisms?" It's "does this product use these mechanisms on the people most affected by them?" So to recap, the six dimensions: autonomy, variable reward, friction asymmetry, temporal boundary erosion, social obligation engineering and vulnerability amplification.
00:17:49
Andrew Wilmot
It's cumulatively scored and mapped to compliance tiers. And we have actually gone through a couple of worked examples. Snapchat, in our first pass-through, scored 26 out of 30. That's critical. And to be clear, this is a very high-level pass-through: in the actual pilot programme, which I'll talk about more in a second, we would need access to internal documents, methodologies, and a breakdown of the algorithms used, and we don't have that with Snapchat.
00:18:19
Andrew Wilmot
We've got a number of red flag conditions based on combinations of high tiers. So for instance, a high vulnerability amplification score coupled with a high variable reward profiling score is an instant red flag.
00:18:33
Andrew Wilmot
So, going back to Snapchat: all four of our automatic red flag conditions were triggered. So, not BrainSafe. Netflix, for what it's worth, scored 16, so moderate, with one red flag.
00:18:47
Andrew Wilmot
And that concern is concentrated in the autoplay. Now, we're not looking to assess content here. There is a variety of content on Netflix, some of which is a lot better for adults and children than others. Some of the content itself is going to be more highly engaging.
00:19:05
Andrew Wilmot
We're not assessing that. And in our full white paper, that is listed as one of the limitations of the framework. We simply can't assess all the content available on a platform with tens of thousands, hundreds of thousands, of hours of content to be consumed.
00:19:28
Andrew Wilmot
Times Tables Rock Stars scored 19, which is high, just about. The competitive ranking among children in a mandatory educational context is the big issue here. And Signal.
00:19:38
Andrew Wilmot
Now, I would not let my children have Signal. And again, this is another one of the limitations of this framework, because we are only looking at the compulsivity potential of applications here. We're not looking at whether the app is fundamentally safe.
00:19:53
Andrew Wilmot
We're looking at products that respect users' cognition. Signal, the Messenger app, scored seven. No concerning findings. We would pass this as BrainSafe certified.
00:20:05
Andrew Wilmot
Barring anything wild that would come up during the assessment process. Signal is not safe for children. It is an encrypted messaging platform where you can be contacted by strangers.
00:20:17
Andrew Wilmot
But it would still pass as brain safe.
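To make the shape of these worked examples concrete, here is a sketch of the scoring model as described in this episode: six dimensions scored one to five, a cumulative total mapped to a severity band and tier, plus red-flag combinations. The band cut-offs are illustrative guesses fitted to the scores quoted on air (26 critical, 19 high, 16 moderate, 7 certified), only one of the four red-flag conditions was actually named, and the escalation rule is an assumption, so treat this as a sketch rather than the published BAIAF:

```python
DIMENSIONS = (
    "autonomy",
    "variable_reward",
    "friction_asymmetry",
    "temporal_boundary_erosion",
    "social_obligation",
    "vulnerability_amplification",
)

# (max_total, severity, tier) -- cut-offs are illustrative assumptions.
SEVERITY_BANDS = (
    (11, "low", "BrainSafe Certified"),
    (17, "moderate", "Certifiable with Conditions"),
    (23, "high", "Under Review"),
    (30, "critical", "Not BrainSafe"),
)

def assess(scores: dict[str, int]) -> tuple[int, str, str, list[str]]:
    """Combine six 1-5 dimension scores into (total, severity, tier, red_flags)."""
    assert set(scores) == set(DIMENSIONS)
    assert all(1 <= v <= 5 for v in scores.values())
    total = sum(scores.values())
    red_flags = []
    # The one red-flag combination named in the episode: high vulnerability
    # amplification together with high variable reward profiling.
    if scores["vulnerability_amplification"] >= 4 and scores["variable_reward"] >= 4:
        red_flags.append("vulnerable users + variable reward")
    for cutoff, severity, tier in SEVERITY_BANDS:
        if total <= cutoff:
            break
    if red_flags and tier == "BrainSafe Certified":
        tier = "Certifiable with Conditions"  # escalation rule is a guess
    return total, severity, tier, red_flags

# A Snapchat-like profile (per-dimension scores invented to total the quoted 26/30):
print(assess(dict(zip(DIMENSIONS, (5, 5, 4, 4, 4, 4)))))
# A Signal-like profile (invented to total the quoted 7/30):
print(assess(dict(zip(DIMENSIONS, (1, 1, 2, 1, 1, 1)))))
```

The point of the structure is the one made in the episode: the classifier never mentions infinite scroll or autoplay by name; it only looks at the structural dimension scores, so a renamed mechanic lands in exactly the same place.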

Implementing the BrainSafe Framework: Pilot Program Announcement

00:20:20
Andrew Wilmot
So, yeah, so Signal is a proof of concept here. This isn't a framework where everyone fails. Products can serve their purpose without engineering compulsion and Signal demonstrates that.
00:20:30
Andrew Wilmot
We're not necessarily anti-engagement, because you can engage users in a way that's positive for them. You just build things that bring actual value to your users.
00:20:43
Andrew Wilmot
So we're not anti-engagement; we're anti engineered compulsion. There's a difference, and the framework can tell the difference. So, as mentioned briefly, we're running a pilot programme, and this is how we move from theory to practice.
00:20:57
Andrew Wilmot
The framework is still actually in progress with minor edits, but we've got, at this point, four businesses signed up to the pilot programme. We need to actually run assessments on real products in partnership with real businesses, and that's what we're doing.
00:21:13
Andrew Wilmot
So we've got four businesses on side, but we are still looking for businesses that would be good candidates for this. And we're not pretending we're going to get Meta or TikTok on board,
00:21:26
Andrew Wilmot
we'll come for you guys eventually. But small to medium businesses building consumer-facing digital products, who want to be on the right side of where this is going, who see the competitive advantages of building products that respect users' cognition, who see advantages to being proactive when it comes to regulation,
00:21:46
Andrew Wilmot
there are three really good reasons to engage with us right now. The first is regulatory anticipation. Across the world, regulators are moving now. The UK consultation we discussed at the start of this episode is going to produce restrictions.
00:22:01
Andrew Wilmot
We're talking months or years, not a huge amount of time. The EU's Digital Fairness Act is expected later this year. Wherever you operate, you're going to have to consider this.
00:22:13
Andrew Wilmot
If you're a product with users worldwide, you're going to need to consider regulatory frameworks worldwide. The businesses that go through a structured assessment now get the documentation, the design discipline and the governance habits that will make compliance routine when the rules land, rather than doing it in a panic several months later.
00:22:31
Andrew Wilmot
The next reason is competitive advantage. You can put the BrainSafe mark on your product if you pass the assessment. It's a market signal. It tells parents that someone independent has looked at the product and confirmed it isn't built like a slot machine. It tells users, the same users who might be looking at doing digital cleanses and such, that it aligns with their values. And this is going to matter more and more every year.
00:22:54
Andrew Wilmot
Look at the success of something like the Yoto player as evidence of that. The market for children's digital products is increasingly distinguishing between products parents trust and products parents don't.
00:23:06
Andrew Wilmot
Getting BrainSafe certified is a way to be on the right side of that line. And the final reason is that it just makes the product better. The exercise of going through the framework forces a product team to articulate in plain English why they've made the design choices they have.
00:23:22
Andrew Wilmot
In our experience, a lot of design choices that have become normalized don't actually survive that analysis. Why is the cancellation flow seven screens? Because we don't want them to leave. Why is autoplay on by default? Because we want them to keep watching.
00:23:35
Andrew Wilmot
When you have to say it out loud, you often find that it doesn't actually deliver value to you as the business. And that's the value of an external assessment: the assessor asks the questions that you've stopped asking yourself.
00:23:46
Andrew Wilmot
So, end goal: where do I want to be in three years? Sell off this entire initiative to TikTok or something so they can water it down, and I'll put my feet up on a beach somewhere?
00:23:58
Andrew Wilmot
No, probably not that. If, three years from now, the BrainSafe mark on a digital product means what Fairtrade does on chocolate, that signal that it's been independently verified; if it's something that parents are looking for, schools are requiring, procurement teams are asking for, regulators are referencing, that would be amazing.
00:24:17
Andrew Wilmot
If this can become the shared language across industry, regulators, parents and consumers for talking about whether a product is structurally compulsion-inducing, that's the end goal. We're not there yet. We are right at the start. We're just launching this pilot.

Engagement and Feedback Opportunities with BrainSafe

00:24:32
Andrew Wilmot
The framework is published, the methodology is documented, and the pilot is open. So here's something that you can do to help. If you love this idea, if you wish you could distinguish between products that are brain-safe and ones that are not, there are two things you can do.
00:24:46
Andrew Wilmot
The first is to follow BrainSafe on LinkedIn. Over the coming weeks and months, we're going to be unpacking the framework dimension by dimension, with worked examples, case studies, and the underlying research. I haven't even mentioned the research. We've been working with academics on this, and God, the literature review is an absolute tome in itself.
00:25:10
Andrew Wilmot
We've got 100-odd studies that we've gone through and referenced as part of this. Anyway, if you want to understand this stuff at a level where you can actually apply it to your own family's digital choices, to the products you choose, or, if you are a designer, to the products you build, or to the policy positions you advocate for,
00:25:30
Andrew Wilmot
it's going to be at that sort of depth, and we would love your engagement: comments, questions, pushback. We're not pretending that we've nailed this on the very first go. Any sort of critique or issue that people can come up with is itself helpful.
00:25:45
Andrew Wilmot
We want to build something that stands up to rigorous dissection. We're going to be evolving it.
00:25:54
Andrew Wilmot
The most useful thing to do is engage with it. If you think we've got something wrong, tell us; that's how we're going to make it better. The second thing, and this one's more specific, is if you know a business that should be in this pilot, put them in touch. We're looking for digital product businesses building things that are used by children and families, or even general consumer applications.
00:26:14
Andrew Wilmot
Anyone who'd benefit from confidence that what they're building isn't engineered to exploit their users. So again: edtech, family-focused apps, streaming, games. I'd love to get some video games on this.
00:26:26
Andrew Wilmot
Productivity tools, anything there. They don't have to be perfect. We're not looking to just do an "oh, you're already certifiable" exercise. It's an assessment. There might be a step to remediation. That's a normal outcome, and it's a useful one.
00:26:39
Andrew Wilmot
The businesses that get the most out of the pilot are the ones that go in with their hands open: show us the product, show us the design choices, show us the bits you're not sure about, and we'll help you figure it out. The team of assessors we've got have decades of expertise between them. You don't have to have it all figured out before you talk to us; you talk to us so you can figure it out. So: LinkedIn, follow BrainSafe; pilot programme, get in touch. Oh, and we will be running the pilot programme for free, and recertification for members of the pilot programme will be free for life.
00:27:12
Andrew Wilmot
So there is a financial incentive to sign up, to be one of the first to sign up. But yeah, follow BrainSafe on LinkedIn, and if you know anybody who would be a good fit for the pilot, put them in touch.
00:27:24
Andrew Wilmot
And that'll help us push this thing from a published framework into something that's actually changing how products get built. And so, yeah, the government's doing something, but it's not going to be enough on its own. And the shape of what comes next depends on whether there are credible alternatives.
00:27:41
Andrew Wilmot
If we can build a parent-friendly, business-friendly, evidence-based alternative that asks the structural questions rather than beginning a game of whack-a-mole, then that's what we need to do.
00:27:52
Andrew Wilmot
And I think that's BrainSafe. I'm very proud of this. I've put a lot of effort into it. It's the summation of all of my obsession over the way we build digital products, and the way digital products are built for our children.
00:28:07
Andrew Wilmot
I'm excited about this. I think this is going to work.
00:28:10
Andrew Wilmot
But that is all we have time for today. Thank you so much for joining me. Don't forget, if you've got some questions for us, or if you or your children have been impacted by the issues we've discussed today, or if you do have any children who are going to launch digital products of their own (very impressive, definitely put them in touch).
00:28:27
Andrew Wilmot
Please do get in touch on the website, thedopamineslotmachine.co.uk. Alternatively, if you want to get in touch about BrainSafe specifically, then the website for that is brainsafestandard.co.uk, or you can find BrainSafe on LinkedIn.
00:28:44
Andrew Wilmot
This has been the dopamine slot machine. Thank you and see you soon.

Outro