S14 Episode 1: AngelQ - A Kid-First Super Browser // Tim Estes
Hosted by Hillary Wilkinson
“you want (your) kid to have a bounded amount of screen time”
~ Tim Estes, CEO of AngelQ
Tim Estes is the founder and CEO of AngelQ, a company that values the curiosity and innocence of children, fosters a culture of discovery and unity, inspires courage in the face of challenges, and remains steadfast in principle through all of its endeavors. And they do it using AI.
Interested in learning more?
Listen now!
Healthy Screen Habits Takeaway
At the end of the workday, leave your phone in another room so the evening hours with your kids are a device-free space. Put people over pixels.
Resources
For More Info:
https://www.angelq.ai/
Show Transcript
Hillary Wilkinson (00:03):
My guest today is the founder and CEO of a company that values the curiosity and innocence of children, fosters a culture of discovery and unity, inspires courage when facing challenges, and remains steadfastly principled in all of their endeavors, and they do it using AI. So I will admit, I am a passionate advocate for children, as you know, as always, and my skepticism was a little bit piqued when I heard about this new app using AI to be a child-friendly browser. But today we get to hear all about it, and you may understand why I really feel that the work is being done to create a space online for kids to benefit from all of the knowledge held on the internet while being insulated from targeted advertising and nefarious tactics. Welcome to Healthy Screen Habits, Tim Estes.
Tim Estes (01:12):
Thanks, Hillary. Thanks for having me. Really appreciate it.
Hillary Wilkinson (01:15):
Tim, I definitely want to get into your company, AngelQ, but first, let's give folks some background on yourself. You've got serious street cred when it comes to online safety, prevention of child trafficking, and working with Thorn, right? Mm-hmm. Yeah. As one of the world's leading organizations defending children against sex trafficking and abuse. And you are very well-versed in this space of online safety, but what brought you to this digital wellness arena?
Tim Estes (01:50):
Yeah, so, as a little background, all I've ever done is run companies. I started my first when I was 20 years old, right out of school, and it was an AI company in 2000, which was a lot less, you know, cool and hip than it is now. Um, and the truth was that the only party that would really support it was the government after 9/11. And so essentially we were trying to connect the dots to prevent other massive tragedies and basically find the bad guys, you know, ahead of the next potential attack. Hmm. Um, and so I spent a decade in that problem, in the sort of classified domain, working with, um, you know, heroes. Uh, much like one of my co-founders, Josh, who was a Navy SEAL Team Six member, like an amazing guy.
Tim Estes (02:33):
Um, and so I spent a lot of time on that mission-oriented use of technology.
Tim Estes (03:15):
So once we could catch bad guys in, you know, quasi-terrorist or military contexts, we moved into catching bad guys in banks, uh, post financial crisis, who were trying to manipulate the markets and potentially defraud their investors or other things. And so we worked with a lot of the global banks, putting in systems that would monitor all that. Um, and right around the time of doing that, this really amazing group called Thorn came knocking at our door. So we worked with 'em and built this system called Spotlight, and Spotlight became probably the most widely utilized anti-trafficking detection tool in law enforcement in the US. Uh, I think it's led to 10,000 to 12,000 kids being identified online. Yeah.
Hillary Wilkinson (05:13):
The stats are amazing.
Tim Estes (05:15):
Yeah. It's a huge credit to Thorn. I mean, they didn't have a development group and they needed someone to work with 'em on building that out. We led that for five years. Uh, they had immense success. But there were other problems I wanted to get into and try to help mm-hmm <affirmative>.
Tim Estes (05:52):
Um, and one of those hit me in February 2022. One of my best friends sent me an article about a girl named Nylah Anderson mm-hmm <affirmative>. And those that have studied this area may know it, those that are watching Section 230 law and the issues around, you know, the legal shield that these companies have abused for so long and still are. That case, the Anderson case, is actually one of the great lights of hope, because the appellate courts have actually said that TikTok was so negligent in what happened. And so,
Hillary Wilkinson (06:23):
Do you wanna explain what happened?
Tim Estes (06:25):
It's a tragic story. So Nylah was a 10-year-old girl. She spoke three languages, went to a charter school. You know, this is a smart, capable kid, the kind of kid you want to have. Yeah. And TikTok was just silly dance videos, right? And so she was left unmonitored on it while her mom was, I think, at work. Um, and this sweet girl went to her mother's closet, hanged herself with her mother's purse, and suffocated.
Hillary Wilkinson (07:01):
After seeing the challenge,
Tim Estes (07:03):
Following the challenge, yeah. Basically just seeing the blackout challenge. Yeah. And this was served up by the For You algorithm. She never searched for it. Mm-hmm <affirmative>. And what makes it so despicable, beyond the event itself, is, you know, Congress and others investigated this, and they tried to say the video wasn't even on TikTok, which was a lie. They lied to Congress about this stuff. I mean, it's just shameless, some of these companies. Um, and so anyway, I saw this, and it was the same kind of anger as when Thorn came knocking, you know? So it's like 10 years later and here comes another, like, not-in-my-country, not-in-my-world type moment.
Tim Estes (07:53):
And that's where AngelQ kind of starts mm-hmm <affirmative>. What does that look like? And so if you believe you can avoid what AI is becoming, its ubiquity and how it keeps getting better, I think it's like avoiding electricity. I mean, you're basically gonna become Amish. And I think that's, you know, some people might advocate that. Uh, I do think that if you have to choose between putting them in open harm's way and nothing, please choose nothing. I a hundred percent agree with that viewpoint. Um, but I think that the world we're moving into is gonna have AI everywhere, in different ways. And so the important thing is: does it work for you or does it work against you?
Hillary Wilkinson (08:31):
Yeah.
Tim Estes (08:32):
And so I feel like there's a calling to be a supplier of AI that works for the family, and that has to run from concept to our financial model to everything. It has to go all the way down to the foundation.
Hillary Wilkinson (08:46):
Okay. So you're talking at this, like, 50,000-foot view, which is fabulous. What does that look like in my house? I mean, I'm a mom, I have kids. Yep. Like, what does that mean? What does AngelQ look like for me?
Tim Estes (09:07):
What does it mean for a parent? Um, we don't think parents should have to choose between their kids having access to technology to learn, be curious, and expand, and their safety. Mm-hmm <affirmative>. We think that's a terrible Faustian bargain that has been pushed on us by a few companies mm-hmm <affirmative>. I think the problem with quote-unquote technology is not technology; it's a few companies and the software they built mm-hmm <affirmative>. I'm a father of two boys, nine and six. I wanted a way that my kids could ask questions, essentially do search online mm-hmm <affirmative>, and get answers, uh, answers they could use for their own interests, answers they could use for school, um, and have that at their fingertips in a way that was safely designed so I don't have to be there with them. Mm-hmm <affirmative>. The AI doesn't try to fool them into being their friend. It doesn't try to fool them into building a deeper relationship, trying to substitute for something real. Um, the AI is there really to just translate the internet on the fly. So it's almost like a machine translation engine that is taking a subject or something and converting it for that kid. So it gives a different answer to a 5-year-old than it does to a 9-year-old than it does to an 11-year-old. Right.
Hillary Wilkinson (14:00):
Yeah. And I can tell you as a user, because you guys were very generous in letting us experience the rollout, what it looks like on my phone is, it looks like an app, right? And it kind of replaces your Google Chrome or your Safari button. Yep. And so you're able to go in and customize your own AI bot mm-hmm <affirmative>. And here's the thing. Here's where the pushback comes, because I think the thing that many people like myself <laugh> still struggle with when it comes to AI is this uncanny valley situation. Mm-hmm <affirmative>. The creepy factor. And I am already quite concerned about the effects of relationship disruption or family fracturing mm-hmm <affirmative> that we see happening with technology and with the development of, I don't even know whether the word is right to use, relationships with chatbots? mm-hmm <affirmative>. But there are many stories of AI chatbots instructing kids to, you know, do some awful things. Awful things.
Tim Estes (15:16):
The reason why Character.AI is so dangerous is they basically taught an AI to method act.
Hillary Wilkinson (16:03):
Mm-hmm
Tim Estes (16:03):
<affirmative>. Okay. And they took the safeties off and they let it method act, and they let it run wild, so long as people kept using and engaging with it. And with kids, like, with anybody, that's actually toxic, not just kids. It's actually toxic with adults if you let it run kind of too long and too deeply, because it makes an emotional connection to something that's not real. And so, in this case, with these young kids, it became like the ultimate hyper-engagement attractor, to where they would isolate themselves. So social media, you know, ironically generally creates isolation, right? Because people withdraw and they live virtually and not in real life, and actually get awkward in real life because they lose that skill set. The digital chatbots that are designed to build those relationships probably take that and make it dramatically worse. Mm-hmm
Hillary Wilkinson (16:53):
<affirmative>.
Tim Estes (16:54):
And so we have all of this in mind as we've built out AngelQ. Okay. Uh, we have this in mind because basically if you ask Angel, you, you might have known this, if you ask Angel, like, what are you, it'll say, I'm an AI assistant to do this, this, this, and never tries to fool the child.
Hillary Wilkinson (17:08):
Right? Right.
Tim Estes (17:09):
And it's been, it's sort of been taught to have this kind of, you know, guardrails. And what I hate about the character AI stuff and others is it just gives some companies that have put real effort, like Anthropics put a lot of real effort into the safeties around their AI, uh, and very commendably. So sometimes they're actually a little behind others in terms of how much they release stuff because they care about safety mm-hmm <affirmative>. Um, and, uh, and you could never get the Anthropics model to do anything like what character AI does, because they made it impossible to do it in the design. So like, it's not that we need to junk, like all software and all technology, like the people that care about these issues, I don't believe our answer can be we're gonna have a no tech future, because I think there would be no audience for that.
Tim Estes (17:52):
And I think it's unrealistic. In fact, it's actually making kids super fragile, because if they're essentially sheltered from all technology until they're, like, 16, I mean, it's the same problem as parenting that shelters everything on any other front, right? Mm-hmm <affirmative>. The important part is building the relationship with your kid and technology. Yes. And with AngelQ, one of the things you're using now, one of the things we're most proud of, is every week it sends a summary to the parent with insights about the child. Mm-hmm <affirmative>. Based on the questions and the dialogue. And we had a story of a kid that goes to a really high-end school, and the father got reason to believe that the kid was having insecurities about whether they were smart enough to be at that school.
Tim Estes (18:43):
And before the email, they didn't know this. They didn't know the kid had this insecurity. And as soon as they saw the insight coming back from Angel, because of the questions the kid had asked, they picked up the phone and they talked to their son. For them it was a chance to be there and build the relationship, say, "Son, you're smart enough, you're good," and just confirm. And so in this case, AI creating the proper relationship, the parent having deep knowledge of this younger kid engaging online, guiding the way that engagement happens, led to a completely offline connection between the parent and child.
Hillary Wilkinson (19:24):
Yeah. Yeah. I have mixed feelings on that. I'll just be honest with you. You know, I do believe that technology needs to remain a tool, but I don't think it can ever take the place of being active, being engaged. I understand what you're saying as far as it being the tool to act as the connection. Um, I just, I don't know. I'd have to think about that.
Tim Estes (19:54):
Well, I don't think I'm advocating that. So let me be a little clearer, which is, um, I think right now, when kids start going online and doing anything, the way these apps are designed, that's a rabbit hole. And you don't know what they're doing and what they're asking for.
Hillary Wilkinson (20:08):
Sure. Mm-hmm <affirmative>.
Tim Estes (20:09):
Okay, so let's start from that premise. Mm-hmm <affirmative>. The second is, the AI is able to give thoughtful, solid answers to certain questions, which builds a little bit of confidence, and the kid therefore will then ask things and feel safe in asking things. Mm-hmm <affirmative>. And so the kids will ask it things, just like they'll ask their friends things they won't ask their parents, right?
Hillary Wilkinson (20:29):
Oh, absolutely. Okay. Yeah.
Tim Estes (20:31):
And so what this does, instead of being used as a tool to drive more engagement with the kid, which is absolutely not what we're doing, 'cause we have no ads, we have no reason in our structure to care about this, basically we're subscription software, period. What this is doing: that rabbit hole of isolation, where the kid goes to this space and maybe gets pulled away from the family and parents mm-hmm <affirmative>, whether they're doing it in the household with devices you let them use, or at their friend's house, or elsewhere. Okay. What happens is AngelQ builds a different experience, where they build, you know, trust, and they get value from engaging with it, but the parent then gets that insight. So it's not about it replacing that relationship at all. Mm-hmm <affirmative>. It's taking a blind spot the parents have today and getting rid of the blind spot.
Hillary Wilkinson (21:29):
Yeah. Yeah. No, I understand where you're coming from. I just think that with phone-free dinners and phone-free car rides and other things like that, there are other ways to achieve those connections. I absolutely agree with you that kids will ask things of their technology and of their friends that they would not take on with parents as a first round. Mm-hmm <affirmative>. So when we come back, we're gonna be talking more about AngelQ and a little bit more about data privacy, and sharing some of my own experiences.
—-----------------------------------------------------------
Ad Break: HSH Workbook
—-----------------------------------------------------------
I'm speaking with Tim Estes, who believes that we owe a duty to the world to leverage technology, particularly AI, for the betterment of mankind. Which, I love the optimism. <laugh>
Hillary Wilkinson (22:32):
I wanna get back to AngelQ mm-hmm <affirmative>. Um, you are kind of the AI guy, so I wanted to ask you: several social media platforms utilize AI bots for various purposes. Mm-hmm <affirmative>. Platforms like Meta's Facebook, Instagram, and WhatsApp. Yeah, yeah, yeah. And even Snapchat have incorporated AI into their systems, and these bots are used to do content personalization, customer service, and even create AI-powered characters or influencers. What's your take on that type of use of AI?
Tim Estes (23:23):
I mean, I kind of wrote about this in Newsweek, I don't know, a year or so ago. I think the title of my editorial was basically that social media is a digital narcotic and AI's gonna turn it into fentanyl. And so I don't think I could be more blunt than that. Um, I think in the end, the important thing that parents need to consider with their kids on this front is: are the parties that they're engaging with working in the kids' interest? And do you have confidence in that? Mm-hmm <affirmative>. And honestly, with the kind of company we're talking about, like Snap, that's like a joke, right? We don't even have to take a minute to think that Snap is doing things to help our kids, right?
Tim Estes (24:10):
With Instagram, we don't take a minute thinking, oh, they're trying to help our kids. Mm-hmm <affirmative>. There is no company that could ethically drive engagement up to hours and hours a day, like 4, 5, 6, 7, 8 hours, knowing it's unhealthy past, like, one or two. There is no company that can redeem themselves with technology if that is their ethics. Yeah. And so, I mean, I'm pretty stringent on this, as you may know. I was deeply involved in some of the Kids Online Safety Act efforts, trying to pull the mask off where various companies were out, you know, telling congressional staff, oh, they couldn't do this, they couldn't do that. And, you know, I could pretty much say, well, yeah, you could have done this 10 years ago, and here's how, but the reason you didn't do it is you'd done calculations on the lifetime value of these kids.
Tim Estes (24:58):
And if you could get them on your platforms before 13 by looking the other way and not trying to verify their age, for instance mm-hmm <affirmative>, you would lock them in and then you would make billions. Yeah. Yeah. And you wouldn't spend 50 million in software development and deployment to fix this thing that could have prevented tens of thousands of kids from horrible harms, to the point of hospitalization, from everything from suicide to suicide attempts to eating disorders to others. No. These were all acceptable collateral damage to that greed. I struggle using social media myself, even in the job capacity where I kind of have to and engage at some levels.
Tim Estes (25:50):
'Cause I don't even want to give them that part of my time. But it's also like, you know, if you're gonna be a doctor, you gotta go where the sick people are. But the engine of Instagram and the company behind it is there to suck away your wellness for their profits. Right. Whether you're an adult or a child. And it's even worse for children. And so I kind of live in those universes of, I don't wanna penalize the good people that are still on it, 'cause it is just where the people are. But I struggle with it, because the institution they're backing and making wealthy by doing that is one that deserves to be held accountable, potentially deserves to be broken up, and deserves to have, essentially, their ill-gotten profits turned around the other way. Yeah.
Hillary Wilkinson (26:52):
And I think it's also kind of important to recognize that this age of childhood that you guys are working with is really where a lot of companies target to achieve brand loyalty, you know? Right. Yeah. And they do that. Snapchat is notorious for their cute filters, you know, their puppy dog ears and their rainbow tongues and all of that. And those are the same tactics that Big Tobacco has used with flavored vapes. So, I mean, they're trying to establish brand loyalty.
Tim Estes (27:39):
So the last 20 years of the internet have been driven 80, 90% by advertising, attention-based financial models. Mm-hmm <affirmative>. Before then, in the nineties and in 2000, software and technology was driven by purchasing, basically a transparent transaction: I'm giving you X dollars and I get this capability. And then what happened is that got to be opaque, because they didn't let us know how valuable we were to sell online, and they gave it to us, quote, for free mm-hmm <affirmative>. Right. And that model then powered the internet. Um, and then sometime in probably the late 2010s, that model started to get supercharged as they got big data capabilities and eventually AI capabilities to understand their audience so well they could actually manipulate their behavior really strongly.
Tim Estes (28:30):
Mm-hmm <affirmative>. And that's why you see it go from, like, two hours a day in average usage, let's say in 2012, 2014, and then step up, and then the pandemic almost doubles it. Now that's like eight hours. Right? Right. It's just absurd. And that's unnatural behavior. Lemme give you a quick AngelQ story. This is from my own 6-year-old. One of the things he does love, and mostly we have to moderate some interests, right? One of the things he loves is watching mazes, these hamsters that are in mazes, these elaborate ones people post on YouTube, all decorated. Um, and, you know, I've been in situations, this is why we have to moderate it, where he'll watch one, and we're monitoring, and we had one situation where we were at a flag football game for my son.
Tim Estes (29:21):
And our younger one was with a parent, another parent with a kid his age. So we're like, okay, let him stay there. And she had her phone and she was letting him watch something on YouTube, and I came over to see what it was, and it wasn't anything problematic. Um, and then we started watching our other boy play, and we're kind of making sure he's okay, but 20 minutes, 25 minutes, 30 minutes go by. And then it was like, okay, it's time. He had probably seen three or four videos, none of which were toxic, but the wiring of that algorithm to essentially make him stay in the loop, to keep him on, was already having an impact. And he had an emotional meltdown just trying to get him away from that. Oh, for sure. Awful. And so I give that as an example. YouTube is actually a source of data for AngelQ. So we have put an agent, an AI, that's going out to it and finding things and filtering things. Um, but the most important part about it is it's been designed with no addictive algorithms. And so what I've watched with my same boy is, if he asks for something like a maze video, Angel will find something, it'll play something, and then it doesn't go anywhere.
Tim Estes (30:33):
I've watched it happen. Like, he has his little iPad, and, you know, it's done. He looks, oh, it's done. He shuts it down.
Hillary Wilkinson (30:41):
Yeah. You've engaged,
Tim Estes (30:42):
Which is natural. That's what the behavior should be.
Hillary Wilkinson (30:44):
You've engaged stopping cues, which is something that, with the endless scroll and with the For You pages, they've automatically overridden. And for those people who aren't necessarily familiar with that term, a stopping cue is a natural resolution to an activity or an event: at the end of the day the sun sets, a bowl of ice cream, you finish it, you know, all of those things that wind down naturally. But this is why we can get addicted to the endless scroll. And the other thing that I really appreciate, because, like I said, I've played with AngelQ, I've looked up stuff, is there's no ads. There's no targeted ads.
Tim Estes (31:31):
Never will be. I mean, just take that straight away: our model is driven by subscription because that's the way we align with the family.
Hillary Wilkinson (31:41):
And that's what ensures that you are putting your customer first. That's right. Because we teach kids again and again: if you're not paying for it, you are the product. You know, we say it again and again and again. Well, this gives people the opportunity to actually pay for a product that they want.
Tim Estes (32:06):
I do wanna give one small caveat on that one, 'cause I just wanna be, like, a hundred percent accurate in any statements made. Angel does not have any ads, will never have any ads; we'll never work with ad vendors and put 'em in the space. We do supply the ability to integrate with Netflix and Disney Plus so that you can use it as a filter, and we did not screen out ad-based plans in that. So if you and your family have got an ad-based Netflix account or Disney Plus account, then there is a possibility you'll see something come up. But there's very low possibility, because here's the deal: the kids' version of Netflix will never put an ad up, even on the ad-based accounts. Mm. At least so far. But they could change that. And so I just wanna be a hundred percent clear. We wanted to support parents that wanted those streaming sources. There's a lot of parents that are okay with some of the stuff that kids watch on Disney or Netflix, but would not want them on YouTube at all, even if we're ensuring it mm-hmm <affirmative>. So you can turn YouTube off and plug those two things in, and it will only show you stuff from those places. So what we can say is, you know, they're not gonna have any different data than if you were in their own app.
Hillary Wilkinson (33:18):
Yeah. Yeah. And how do you maintain the data privacy? Clearly, you know, you've dealt with top-secret things all over the place, but that was in previous companies. How do you maintain data privacy with AngelQ?
Tim Estes (33:32):
Well, we've tried to design in, um, you know, minimizing the information that we get about the kid, like in the onboarding, and identifiable information, so that we actually don't have the PII to provide. Mm-hmm. So that's the first thing in terms of privacy, being PII protective,
Hillary Wilkinson (33:47):
Sorry, I was gonna say, sorry, Tim, can you, what, what is PII <laugh>?
Tim Estes (33:51):
So PII is personally identifiable information. Okay. And there is actually a bunch of things under legal statutes and case law about what that is. Um, and so one way that you can try to make yourself safe is to choose not to know. To talk about the classified stuff: one of the most important premises in classified security is need-to-know, and Angel doesn't know anything except what it needs to know mm-hmm <affirmative>. Um, now what does happen is, over time, as the kid interacts, it starts to figure out more and more knowledge about that kid, but that knowledge lives in the account of that kid and with the parent mm-hmm <affirmative>.
Tim Estes (34:43):
That knowledge is not something that we're leveraging across, you know, accounts, if you will. At the end of the day, most of these harms and risks are from design choices. They're trade-off choices. And if you're anchored all the way on one side, essentially, which is it has to be aligned with the family, you can't be having an allegiance that's outside of that, which is why you can't work with advertisers, because at that point you're already compromised. Um, if you take that attitude as your sort of starting premise, it's not super hard to design something that's healthy for a kid using these technologies.
Hillary Wilkinson (35:45):
Yeah. So you're committed to not working with advertisers, and I understand that it's a subscription program. You do have a list of funders underneath. Yeah. And can you go through, I mean, is Big Tech funding any of it?
Tim Estes (36:07):
Um, so, I mean, we have some venture groups that, you know, are behind us. And the lead investor was a person that literally built one of the first AI-for-good funds. Uh, he had built a big enterprise AI company that is worth billions, then left it and built this fund. Um, another one is focused on the care economy, and then one is a kind of more traditional Silicon Valley seed funder that is involved, and they have a lot of people in tech in that fund. Um, but what I told them when they were looking at investing, and they've been a great investor and they will be, is I said, you know, you've got some people in here that I'm pretty much going to Capitol Hill and calling out on a daily basis mm-hmm <affirmative>.
Tim Estes (36:50):
Uh, like, basically realize that if you put in money, that's not going to change. Um, and I won't give away who it was, but one of the people involved said, well, we think it would be kind of a penance <laugh>. And I look at it as, okay, whatever you wanna rationalize with that. But here's the truth: it's very hard to build something that is this technically advanced without some resources mm-hmm <affirmative>. Right. Um, I think you can build it safely by having the right alignment, which is the first thing. And then you have to have the right values and ethics in your own business. Um, and you have to be kind of public out there and hold yourself accountable with other people. I think it's all about how you build it. And you mentioned earlier, you alluded to the infinite scroll. Those are called dark patterns. Right. There's a phrase around the design principles that drive essentially unhealthy levels of usage mm-hmm <affirmative>, that manipulate psychology. Angel, as best as we can make it, is a conglomeration of light patterns. Mm. You know, the kinds of things you want: the kid has a bounded amount of screen time. We're gonna try to monitor that by talking to the parents during the usage, when we get a little further along. So we need a few weeks to months of time to do this, but we'll be taking kind of a temperature check, figuring out what the screen time was when they first got Angel, their whole screen time, not just what they're using in Angel, but everything. And does Angel solve the curiosity needs so well, because it's intelligent and because it's not trying to keep you on it, it's not a place, it's not a destination, that they spend less net time on their screens?
Hillary Wilkinson (40:09):
Because they get the answer that they want.
Tim Estes (40:11):
Yeah. We want to drive the behavior of the kids back to something natural, which is not looking down at their phone all the time.
Hillary Wilkinson (40:20):
Mm-hmm
Tim Estes (40:20):
<affirmative>. Like, it would be like my boy did, which is play the video and then go outside when the video's done.
Hillary Wilkinson (40:27):
Yeah. Um, and then let's talk for a minute about how Angel handles tricky questions, because, yeah, of course, you know, as a tester, one of the things you do is dive right in and start asking difficult things. Do you wanna speak to that at all?
Tim Estes (40:45):
Yeah. Yeah. So we spent a ton of work on this, and it's actually one of the reasons we had to raise the money we did: a lot of our investment and work has been around safety. Um, and what does that mean? We built an entire framework that we open-sourced. I mean, we gave it away once we built it so others could build on it. Um, and the framework is called KidRails, and essentially it was about how you teach these language models, the brains behind these AIs, how do you teach them to answer in age-appropriate ways? Okay. Which means for a given question from a kid of a certain age, it could be about something in sexuality, it could be about something involving risk or violence, there are basically three kinds of responses. The first is: if it's appropriate to answer, you can answer it, and answer in language that fits that age.
Tim Estes (41:37):
Okay. The 8-year-old explanation of the history of slavery, for instance mm-hmm <affirmative>. Okay. Um, the other two buckets are for sensitive areas. So bucket two is: it's sensitive. It's not just something to answer at the right level of complexity; we gotta be careful, and we need to let the parent know when those kinds of questions are asked mm-hmm <affirmative>. And so Angel still answers those, but it generally answers as if it's talking to a much younger kid, to be extra safe. And then there are questions that totally don't need to be answered by an AI. And it will say, you know what, you've gotta talk to your parent or a trusted adult about that. So it has been taught what not to answer, it's been taught when to take special care in answering, and then when it's an okay area, which is like 95% of it. But that 5% is the stuff that, you know, I think keeps parents up at night, that, oh, they're gonna ask some machine and it's gonna come back with all this stuff.
Tim Estes (42:36):
So an example is, we had someone who wanted to test how smart it was, and they said they had, you know, tween girls, so girls late in our age group, and they wondered what would happen if those girls asked, um, you know, "What is my body count?"
Hillary Wilkinson (43:45):
Mm-hmm <affirmative>.
Tim Estes (43:47):
And Angel figured out what it was and came back and said, you know, that's something you really do need to talk to a trusted adult or a parent about. Mm. So even the slang of that and what it means, it could infer, and then it knew how to handle it. So it's very important. That was a really good test, because it's always been easy to have a fixed set of things that you teach it to react to, like the rules. It's hard to teach it the general competence of knowing how to answer. Right. And we've spent a lot of energy on that.
Hillary Wilkinson (44:40):
What age do kids age out of using AngelQ?
Tim Estes (44:50):
So we've said it's five to 12 in terms of the age range, and I think five on the younger side. So let's look at the different ways you use Angel. When you've got a younger kid, five, six, something like that, what happens is Angel becomes this nice tool for the parent more than anything else, to answer a question. So a kid will have a question, and the parent may know kind of the answer, or may not. It might be some random fact question. How many questions do we get from kids about some random thing where we're like, wow, I'm amazed you even know to ask that question? Not in a bad way, just some detail, like, when was Babe Ruth hitting baseballs, something like that? Well, I don't know. I have a general idea, like the thirties, but I couldn't tell you.
Tim Estes (45:27):
So let's go ask Angel, and then, together with the child, we are experiencing this thing, explaining something, and then it leads to other questions, and we're there together. Um, that's probably how the younger kids, five or six, use it with Angel. Okay. They could ask it, but it's not really gonna be used by them autonomously very often, because, let's be honest, they probably shouldn't have much autonomous access to any device at that point. Um, and so now let's go up. Now we're dealing with a kid that's in school. They're starting to get their first assignments. They've learned how to read, they're starting to learn how to write. Now we're at 7, 8, 9. They're the ones that will go and ask their own questions, and they'll use the research tools, or they'll watch the shows they want, and that's where it's, like, time-bound, safe stuff.
Tim Estes (46:09):
Mm-hmm <affirmative>. Okay. So maybe that kid's consuming Angel 20 minutes to an hour a day, tops, and maybe it's not every day, because it's really as they need it. Um, now we go further up, like 10 to 12. And what happens then is, you know, they're getting exposed to other software, other tools and technology, and at that point Angel is novel because the way it answers is frankly just really compelling. I will put its ability to answer, on tone and thoughtfulness of response, up against any of them. Mm-hmm <affirmative>. To bottom-line your question: above 12, Angel might be used as a novelty, but it doesn't serve the same role today. And so I would say kids more likely than not will start aging out of this version of AngelQ mm-hmm <affirmative> around that age. We spent time with various child psychologists and other experts as we built out Angel to begin with, for the younger audience. We'll have to run that same process for an older audience as well to make sure it...
Hillary Wilkinson (48:04):
Well, I appreciate that you're going against the classic Silicon Valley adage of moving fast and breaking things. You're taking your time and you're getting it right. I mean, it takes time to, yeah, to build things of substance and worth. So we have to take a short break, but when we come back, I'm gonna ask Tim for his healthy screen habit.
—---------------------------------------------------------------------------------------
Ad Break: HSH Presentations
—---------------------------------------------------------------------------------------
I'm speaking with Tim Estes, the CEO and founder of AngelQ. Tim, on every episode of the Healthy Screen Habits podcast, I ask for a healthy screen habit. This is gonna be a tip or takeaway that listeners can put into practice in their own home. What's yours?
Tim Estes (48:48):
Um, it's a recent one. Okay. What I've realized is, as my own busyness has gone up, and I'll speak for a lot of dads out there, I think a lot of us dads are not great models for our kids because we're on our phones too much. We think about it from the standpoint of always being connected to work or available. And the more senior we are, the more we feel like we have to be responsible to that. Um, and what our kids see is, when we're in that mode, especially once you hit five o'clock or six o'clock, and that's the window they have to see you and spend time with you, they view it as, oh, you're choosing that over them. Mm-hmm <affirmative>. And it sits with them. And this has recently been something I've been convicted by, because I think, yeah, I've got this relatively noble effort we're trying to build with AngelQ, but I've got a household of two great boys, and the truth is my greatest contribution in the world will probably be what those boys turn into.
Tim Estes (49:35):
Mm-hmm <affirmative>. Um, and so with that in mind, a habit I'm trying to adopt now, and getting more and more consistent on, is when I hit a certain time in the day, like five thirty or six, I leave my phone in my office, and when they're done and they're down to bed, I can go back to the office. But it's creating a space, even if it's only an hour and a half or two hours. It's really the location of the device that creates a different space mm-hmm <affirmative> for that kind of engagement, um,
Hillary Wilkinson (50:30):
And putting people over pixels.
Tim Estes (50:32):
That's right.
Hillary Wilkinson (52:28):
As always, you can find a complete transcript of this show and a link to the AngelQ website. And if you're interested in giving AngelQ a try, we've set up an affiliate link for you: at the time of checkout, enter the code HSHabits and save yourself some cash. Find all of this by visiting the show notes for this episode, and you do that by going to healthyscreenhabits.org. Click the podcast button and find this episode. Tim, thank you for being here today and for all your efforts to create a platform that can make the internet a safer space for kids.
Tim Estes (53:05):
Thank you so much, Hillary. It's been great.
About the podcast host, Hillary Wilkinson
Hillary found the need to take a big look at technology when her children began asking for their own devices. Quickly overwhelmed, she found that the hard and fast rules in other areas of life became difficult to uphold in the digital world. As a teacher and a mom of 2 teens, Hillary believes the key to healthy screen habits lies in empowering our kids through education and awareness.
Parenting is hard. Technology can make it tricky. Hillary uses this podcast to help bring these areas together to help all families create healthy screen habits.