Pros & Conversations

Episode 21: The Science of Entrepreneurship: Inside Winter Light Labs' Groundbreaking Technology

Peter G. Reynolds Season 3 Episode 21

In this episode of Pros and Conversations, host Peter Reynolds and co-host Damon Adachi dive into the fascinating world of science-based entrepreneurship with the co-founders of Winter Light Labs, Maria Yancheva and Jordan Ponn.

Discover how their innovative technology analyzes speech to detect and track cognitive health issues like Alzheimer's and Parkinson's, and its potential to transform the lives of millions of people around the world.

Learn about the specific challenges facing science-based start-ups, including how marketing plays a critical role in humanizing technology and building trust with investors and the public.

Learn more at: https://winterlightlabs.com/

Support the show

Thank you for listening! You can support and help us create great content for entrepreneurs and small business owners by clicking here: https://www.buzzsprout.com/1985155/support

Subscribe on your favourite podcast app and don’t miss an episode!

We’re also on YouTube: https://www.youtube.com/@prosandconversations?sub_confirmation=1

Follow on Facebook: https://www.facebook.com/fortherecordproductions/

Follow on Instagram: https://www.instagram.com/fortherecordproductions/

Episode 21  - The Science of Entrepreneurship: Inside Winter Light Labs' Groundbreaking Technology

Peter Reynolds: 00:04.466 - 00:45.716

Hi, I'm Peter Reynolds, and welcome to Pros and Conversations, the podcast that explores what it takes to be successful, whether you're from the world of business, science, or the arts. On this episode, we'll be exploring the science side of entrepreneurship. Winter Light Labs is transforming cognitive health by developing tools that analyze speech to detect and track diseases like Alzheimer's and Parkinson's. It's technology that has the potential to improve the lives of millions of people around the world. And someone who improves my life just by sitting in the chair beside me is co-host and marketing consultant, Damon Adachi. Good to see you again, Damon.

Damon Adachi: 00:46.903 - 00:57.507

Great to be back. Yeah. Ready to start another season. This'll be interesting. I can always talk business and I can be artsy. I might be swimming upstream a bit in the science category, but we'll, we'll figure it out.

Peter Reynolds: 00:58.107 - 01:16.074


Yeah. I thought this would be a great way to start off our third season to sort of explore the science side of entrepreneurship. And while in previous episodes, we focused on the nuts and bolts of building businesses across various industries, science-based entrepreneurship comes with its unique challenges, doesn't it?

Damon Adachi: 01:17.583 - 01:30.906

I would think there's a lot that kind of goes through my mind on the different aspects of it, but I think we will speak to the experts and get their input and make sure that we're being honest about it. And, uh, yeah, I'm excited to talk about it.

Peter Reynolds: 01:31.206 - 01:48.010

Perfect. Perfect. Well, that's a great segue to introduce our guests from Winterlight Labs. Uh, we have Maria Yancheva, who is co-founder and CTO, and senior software engineer Jordan Ponn. Maria, Jordan, welcome to Pros & Conversations.

Maria Yancheva: 01:48.030 - 01:49.592

Thank you. Thanks for having us.

Peter Reynolds: 01:50.454 - 02:00.269

Maria, I'm wondering if you could take us back to the beginning. How did Winter Light Labs get started and what motivated you to focus on using speech to monitor cognitive health?

Maria Yancheva: 02:02.574 - 04:17.763

Yeah, so the idea for this kind of technology actually comes out of the research lab at the University of Toronto, where I did my master's degree. I was working with a group of very smart folks, with Professor Frank Rudzicz, Katie Fraser, and Liam Kaufman, my co-founders. We were at the Computational Linguistics Lab in the Department of Computer Science. And the lab was interested in looking at the application of machine learning models to healthcare problems, which can have a meaningful, positive impact on patients in the real world. And it was actually this application to the real world that Professor Rudzicz pitched me on when I was deciding whether to go back to grad school or stay in industry. And I really found the projects that he was working on to be really motivating. So his lab had already been working for many years to understand the way that we can use speech to detect symptoms of various diseases in a more objective way, and to develop applications that can be helpful, for example, in senior care home settings to monitor the cognitive health of folks in retirement homes, which can, for example, allow us to develop more personalized treatments and activities that better meet their needs. And then on the other hand, he was also looking at assistive technologies that can help older adults live independently for longer at home. One example of that, which I think was in the news at the time, was this cute little robot that he developed that could communicate with older adults, identify when they're getting confused, and help them with various tasks around the home. For me personally, the idea of being able to build more accurate measurement tools for diseases like Alzheimer's, where early detection and the ability to pick up on subtle changes is really important to enabling researchers to develop better treatments, was really motivating.
One of my grandmothers had Alzheimer's, and so the idea of being able to contribute to the development of better therapies and treatments, albeit from a computer science rather than a medical science background, was really meaningful. And then also on a personal note, I really like languages and math. So this intersection of speech, computer science, and healthcare was really perfect for me.

Peter Reynolds: 04:18.974 - 04:32.587


It seems like the perfect combination of skills to tackle this problem. And it's fascinating, this idea that speech can reveal so much about our cognitive health.

Maria Yancheva: 04:35.632 - 05:36.466

Yeah, for sure. Cognitive health has obviously many different facets. And so there's memory. People often think about memory as one of the first signs of Alzheimer's disease. But certainly there are many different aspects. There's linguistic ability, executive function, and others that all contribute towards the overall cognitive status of someone, for example, someone with Alzheimer's. And so what we were really focused on was adding more objectivity and sensitivity to that linguistic category. But you can also think of linguistics as a lens for looking at the other types of impairments. So, for example, if somebody is having memory problems, they may not be able to remember the most precise word for something, or the most specific word. And so they might substitute, or maybe they can't remember someone's name and they will substitute with a pronoun or something like that. So yeah, we were quantifying language for the sake of quantifying linguistic ability, but also as a lens for other types of impairment.

Damon Adachi: 05:37.867 - 05:47.169


So speaking of specific meanings for words, I'm very curious about the name Winter Light Labs and where it comes from and what it means to you from a brand point of view, because really that's all I know about it.

Maria Yancheva: 05:48.962 - 06:31.783

Oh my gosh, this is a hard question. The hardest thing, I think, was finding a name. It's one of those things that is so subjective, and it's really hard to come up with a name that's easy for folks to remember and meaningful. But this is actually a word that my co-founder Katie Fraser found. It was in a glossary of lighthouse vocabulary, and it's basically a type of light that is used to signal danger ahead. So we were thinking of it as symbolizing what we're trying to do: develop technology that can detect early signs of Alzheimer's. So we liked that metaphor.

Damon Adachi: 06:32.435 - 07:01.125

Well, that's brilliant. And for me, it really resonates because from an outsider's point of view, I understand that you're so technology based and, and intelligence based and science based, but it also feels like you're in the business of hope. And that's really what I root to when I hear that is that concept of light at the end of the tunnel or whatever, you know, whatever it means to you. But, um, I think that's from a brand point of view, I love it. I love the way that it signals that you're going to do something in the future that will help people and, and deal with the disease.

Maria Yancheva: 07:02.458 - 07:42.234


Yeah, for sure. And in our way, because yeah, one half of it is developing, you know, better treatments, but then the other half is being able to tell if a treatment is effective or being able to detect those early signs of those types of neurodegenerative diseases, especially for Alzheimer's. It's something that can go on for many years before it starts to show clinical symptoms. And I think part of the reason why it's been so hard to study the etiology of the disease is that we don't have longitudinal data over the course of someone's life or many years before they start to show those more significant symptoms. And so it is, I think, really important from a research perspective and allowing folks to develop better treatments in the future.

Peter Reynolds: 07:43.555 - 07:49.890

So Jordan, let's get into some of the weeds here and talk about how the technology actually works.

Jordan Ponn: 07:51.556 - 07:53.237


Is there anything specifically you'd like to know?

Peter Reynolds: 07:53.537 - 08:16.764

I know I asked you a very broad question, didn't I? Well, I know Maria touched on this idea of analyzing speech and looking for, you know, changes in vocabulary. Can you talk a little bit more about exactly what it's analyzing and how it's able to make those kinds of diagnoses?

Jordan Ponn: 08:17.665 - 09:34.323


So in our system, there are essentially four main categories that we're simultaneously assessing, and we're looking at different domains of speech and analysis. So the first one would be the acoustics. So basically it's looking at, you know, the tone of your voice, the pitch that you're speaking at, how long you're pausing, things of that sort. Basically, what does it sound like when you're talking? The second one would be the lexicon. So basically, what are the kinds of words that you're choosing to use when you're describing things? For example, are you going to use more pronouns? Are you going to use something more vague, like "it" or "they"? Or say, if you're describing a vehicle, are you going to say it's a car, or are you going to call it a sedan? You know, how specific are you going to be with that? The third category would be syntactic, so the types of sentences you're saying. Are you using very short, simple sentences? Or are you using something that's very long and drawn out, with lots of adjectives, so on and so forth? And then the fourth category would be the content. So what are you saying? Are you repeating yourself a lot? Are you speaking in more abstract terms, or are you talking about something a lot more specific than that? And then from there we can basically drill down into any one of those categories.
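The four domains Jordan lists here can be sketched as a minimal feature extractor. This is purely an illustrative toy under stated assumptions, not Winterlight's actual system: the pronoun list, the pause proxy, and every function name below are invented for the example.

```python
import re
from collections import Counter

# A tiny, hypothetical set of vague pronouns (a real lexicon would be far larger).
PRONOUNS = {"it", "they", "he", "she", "this", "that", "them", "these", "those"}

def speech_features(transcript, word_timings=None):
    """Toy extractor over the four domains described above.

    transcript   -- plain-text speech sample
    word_timings -- optional list of (start_sec, end_sec) per word,
                    used for the acoustic/pause proxy
    """
    sentences = [s for s in re.split(r"[.!?]+", transcript) if s.strip()]
    words = re.findall(r"[a-z']+", transcript.lower())
    counts = Counter(words)

    features = {
        # Lexical: how often vague pronouns stand in for specific nouns
        "pronoun_ratio": sum(counts[p] for p in PRONOUNS) / max(len(words), 1),
        # Syntactic: average sentence length as a crude complexity proxy
        "mean_sentence_len": len(words) / max(len(sentences), 1),
        # Content: type-token ratio; low values suggest repetition
        "type_token_ratio": len(counts) / max(len(words), 1),
    }
    if word_timings:
        # Acoustic proxy: total silence between consecutive words
        gaps = [b[0] - a[1] for a, b in zip(word_timings, word_timings[1:])]
        features["total_pause_sec"] = sum(g for g in gaps if g > 0)
    return features

sample = "I saw it. It was... it was a car. A car, yes."
print(speech_features(sample))
```

Real acoustic analysis would of course run on the audio itself (pitch, tone, pause durations); the transcript-only proxies here just show how each domain reduces to a number that can be tracked.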

Peter Reynolds: 09:36.324 - 10:52.906


It's absolutely fascinating. And with AI so much in the news now, and maybe not looked upon in the best light, it's wonderful to see what it can be used for, you know, to really change lives. And it was interesting, Maria was talking about this idea of, what was the phrase you used, Maria, that it's a more dispassionate look at language? We're not making judgment calls; it's looking purely at the words being used and how they're being said. Which is so interesting, because as family members, you can fill in the blanks for your family members and not realize. I think it can probably go both ways: either you know them so well that you notice the problems when others don't, or, if the decline has happened gradually enough, you as a family member don't notice it. But the technology doesn't have that connection. It's able to make a dispassionate judgment.

Maria Yancheva: 10:54.488 - 13:27.707


Yeah, for sure. We view it as kind of a more objective way of quantifying those symptoms. I think one challenge with the more standardized types of assessment that are traditionally used, and they have their own value and their own benefits, is that variability when you have, you know, a human who is talking to the patient, trying to assess them, trying to go through a number of different questions and then score the answers. There is that tendency, as you're saying, to sometimes not strictly follow the protocol exactly as it's supposed to be. Like maybe if they couldn't understand the question, we'll repeat it one more time because we want to make them feel a little bit less stressed out, which might change the outcome of the overall assessment. Or when we are then scoring the assessment: a lot of these standard cognitive assessments for Alzheimer's are used as primary endpoints. For example, one of them is called the ADAS-Cog. It's an assessment that has a free-form interview component between the participant and a clinician, and you score linguistic ability or linguistic impairment on a scale of, let's say, zero to five. But it's really hard to calibrate that across clinicians or across folks who are administering the assessment. So one of the limitations is that it is subjective. It can vary a lot. Of course, there's a lot of value in the clinician's judgment and experience. But that type of variability across the different individuals who are administering it can actually greatly reduce the strength of the data in the clinical trial. So you can think of a trial where maybe you have a treatment that actually is effective, maybe a small effect. But if there is a lot of noise and variability in the data, that can mask the effect, and the overall outcomes may not be statistically significant.
And so it could be the differentiating factor between a successful and a failed trial, which is where we see the value of these types of more objective assessments. And we're not trying to replace the primary endpoints, but really supplement them. My co-founder Liam would often say there's a lot of value in having a basket of evidence, especially because, you know, there are so many aspects to cognition. So what we're providing is very sensitive to changes in speech, and you can supplement those other assessments with it, and I think together you have a better picture of how someone's doing.

Damon Adachi: 13:29.286 - 13:53.270


So that's extremely interesting. And what I'm trying to understand from this, and maybe Jordan, you can shed some more light on it from what you mentioned earlier: you can't immediately make an assessment on first blush. You have to have some calibration of your measurements so that you can see a decline or a change in the subject over time. Or is it something where there are standardized metrics it can immediately be measured against?

Jordan Ponn: 13:53.715 - 14:37.431

Well, generally we would record you over time, essentially, you know, because everyone has a different baseline, right? Everyone has a different level of education, a different background, a different way and approach to how they would go about describing things and how they talk. So generally it's easier if we get a first recording of them, do some analysis on that, and then every subsequent period after that, we can compare to the baseline and see how they change over time. We do also have normative data. So basically, across the general populace, or whomever we have done analysis on, we can draw on that pool of data and see how you compare against that population. But ultimately speaking, we would want to compare you against yourself. We want to see how you're doing over a period of time.
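The two comparison modes Jordan describes, change from your own baseline and position relative to a normative pool, can be sketched like this. It's a hedged illustration with made-up feature dictionaries and function names, not Winterlight's real scoring:

```python
from statistics import mean, stdev

def change_from_baseline(baseline, followup):
    """Per-feature change relative to the subject's own first recording."""
    return {k: followup[k] - baseline[k] for k in baseline}

def z_vs_population(subject, population):
    """Compare one subject's features against normative data (z-score per feature)."""
    scores = {}
    for feat, value in subject.items():
        vals = [p[feat] for p in population]
        mu, sd = mean(vals), stdev(vals)
        scores[feat] = (value - mu) / sd if sd else 0.0
    return scores

# Toy example: pronoun usage drifting upward between two recordings.
baseline = {"pronoun_ratio": 0.10}
followup = {"pronoun_ratio": 0.18}
norms = [{"pronoun_ratio": 0.10}, {"pronoun_ratio": 0.12}, {"pronoun_ratio": 0.14}]

print(change_from_baseline(baseline, followup))
print(z_vs_population(followup, norms))
```

The design point is the one Jordan makes: the within-subject delta is the primary signal, with the population z-score as supporting context when no baseline exists yet.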

Maria Yancheva: 14:40.560 - 15:02.909


Yeah, broadly, like some of the main applications of this technology in clinical trials are to check for this change over time, which could be due to changes in the disease itself or due to treatment. So trying to assess the effectiveness of treatment in a clinical trial. And so that's where this comparison from baseline to the end of the trial is really important.

Peter Reynolds: 15:04.821 - 15:11.603


Where are we currently with the technology? How close are we to it going to market?

Maria Yancheva: 15:14.944 - 15:34.291


Well, so in terms of this use case in clinical research, this is where we have found a strong product market fit. This is very much something that we sell and we've been getting a steady stream of revenue from over the years and that has grown. It's an area where we have seen that repeat business and growth.

Damon Adachi: 15:35.717 - 16:08.431


So is your market approach similar to that of something like HearingLife? I've noticed it has really picked up a lot of visibility, and I'm a little further outside of the metropolitan area and I see these locations everywhere, in the sense that the population is aging, it's becoming an issue of awareness, and people are more inclined to say, perhaps I should get my hearing tested more often. Is this a model that you're mimicking, saying we want to be able to offer locations that people can go to for testing and for monitoring?

Maria Yancheva: 16:10.673 - 17:23.443


So generally speaking, we work with the pharmaceutical company and an organization that they typically contract, called a contract research organization, that actually handles the logistics of the clinical trial. So they're the ones who would recruit patients and have sites for folks to come in and do assessments. And there would be folks called raters, who actually administer all of these different assessments. So there would be not only Winter Light, but let's say a number of other assessments as part of the study protocol for the trial. And so our assessment can be done in clinic at one of those sites, with a rater administering it to a participant, or it can also be done remotely. So that's one of the benefits of this type of technology: it's digital and it's easy to use, so it doesn't require a significant amount of training. Depending on whether participants are cognitively impaired or not, and what the population of the trial is, in some cases it can be self-administered or administered by a caregiver in a remote location. So that is one of the benefits. Does that answer your question?

Damon Adachi: 17:23.823 - 17:51.566

Yeah, absolutely. No, and you know, I'm sort of thinking bigger picture from the market perspective of how this becomes adopted and accepted and understood and implemented. So I know that's not all of your responsibility from a winter light perspective, but I love the idea that, you know, this could be a significant change in people's understanding of these issues and how they can be caught earlier and dealt with. So absolutely great answer.

Peter Reynolds: 17:51.726 - 18:06.504


Yeah. No, I was just going to say, Damon, I think that goes to scalability, you know, and that idea of scaling your business like anyone does, and looking at where you are now and where you see the technology in the future.

Maria Yancheva: 18:08.788 - 20:08.455


Yeah, so I think people often think about the bigger-picture vision of how we can use this, you know, in the real world: how doctors can use this with patients, how we can use it to detect maybe early signs of Alzheimer's in the real world. But in this area, in the area of healthcare and research, it really is a very long process of going through rigorous validation. And so it really takes many, many years of using this technology. First, in the context of clinical trials, which is where you can do R&D, you can have exploratory endpoints, you can collect data, publish results, and continuously validate this type of technology until it becomes more established and people feel more comfortable using it alongside more traditional types of assessment. And that can take many years. I think we perhaps underestimated the number of years that that process would take. And that's one of the challenges, I think, of having a startup in this area compared to a regular software startup. It could literally take a decade to collect the set of data, be in enough trials, have enough clinicians use it, and have enough data and evidence to show that this is useful and effective and adds something on top of other types of assessments, before it can then become something that folks use in the real world, meaning in doctors' offices, or that folks can use themselves. In the area of Alzheimer's, it was also challenging because for a very long time there haven't been effective treatments. Only in recent years have we seen some changes in that. And so people would often ask, well, you know, if there isn't a treatment, what is the point, or what is the business model of having this? And so it kind of has to go hand in hand with developing the other side of it, the treatment and therapy side.

Damon Adachi: 20:09.969 - 20:34.392


So I hear you saying that it might take longer than you thought in terms of the development stage and, you know, scaling up. Do you think that technology is advancing faster than the market can keep up with, in terms of adoption and understanding and acceptance? And Jordan, you must be seeing changes to what you're capable of doing on an almost monthly basis.

Jordan Ponn: 20:35.553 - 21:19.218

Oh, yeah, absolutely. It's definitely progressing very quickly. And it's very hard to keep up with knowing, like, how should we handle the technology? What are the safeguards we need to have in place to make sure we're not abusing people's data? What can be shared, what should not be shared, all that kind of stuff. So yeah, as you're seeing, particularly with a lot of the generative AI stuff, we're not sure what kinds of things are appropriate to publish, what kinds of things are appropriate to remain in the public domain. So there are a lot of challenges in working with AI and trying to figure out, because it's so new, what we should and should not do. Just because we can do it doesn't mean we should do it. Ah, ethics. Ethics, of course. So annoying.

Peter Reynolds: 21:19.698 - 22:00.944


It's funny, because I was thinking about that, how, you know, we were talking at the beginning about helping seniors stay more independent, and this idea of having a virtual assistant or a little bot that is with your mother or father, you know, helping them throughout the day, but that can actually identify any challenges they might be facing or any decline they might be facing. But then of course, they're being recorded all the time. They're being watched all the time. So there are privacy concerns. Where is that data going? It just seems to open up a whole can of worms that maybe we didn't want to open up.

Maria Yancheva: 22:02.310 - 22:44.310

Yeah, absolutely. I mean, this is another of the challenges that we've had to keep top of mind: being really careful about how we handle data and really building privacy and security into every process, into every aspect of our software development and other practices within the company. So it's been top of mind from day one. And so, yeah, if you think about what's the hardest type of startup you can do, it's having one that's both research-based and healthcare-based, where you're dealing with all of these privacy laws. It definitely has been very challenging, but also the potential impact is very significant, which is why we were motivated to do this.

Damon Adachi: 22:45.450 - 22:57.655


So I think I've lost count of how many different languages we translate this podcast into in our global market. But it raises the question: how do you deal with different languages in your process?

Jordan Ponn: 22:58.717 - 24:08.197

That's a good question. Um, so most of the natural language processing research tools available are basically in English. So the challenge there is, how do we find an equivalence and compare between different languages? Because, you know, some concepts in one language might not appear in another. For example, English is not a gendered language. So when you're talking about French, Spanish, and so on, how does that compare when someone's speaking those languages versus English? What kind of biomarkers are we looking for? What changes in speech are we looking for? And so on and so forth. The way we basically handled it is we've looked for frameworks, and we've gotten linguists to help us identify these frameworks, validate them, and explain to us how they work and how they can be a common denominator across all languages. So that's why we have so many features: because we're basically bringing them down to common denominators, so we can convert them to numbers that our systems can use for analysis and for modeling. So basically we can do a language-agnostic analysis. Of course, there are always going to be language-specific features we can look at specifically, but that will depend on the disease area, the client we're working with, and so on, depending on what we're doing there.
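The "common denominator" idea Jordan describes, one outcome measure computed from language-specific word lists so the resulting number is comparable across languages, might look like this in miniature. The per-language pronoun lists here are invented stand-ins, not the company's validated linguistic frameworks:

```python
import re

# Hypothetical per-language lists of vague pronouns; a real framework
# would come from linguists and be far more complete.
PRONOUNS = {
    "en": {"it", "they", "this", "that"},
    "fr": {"il", "elle", "ils", "elles", "ça", "cela"},
}

def pronoun_ratio(text, lang):
    """One cross-language feature: share of tokens that are vague pronouns.

    The underlying word list differs per language, but the resulting
    number lives on the same scale, so it can be compared across
    sites and languages in a multinational trial.
    """
    tokens = re.findall(r"\w+", text.lower())
    if not tokens:
        return 0.0
    return sum(t in PRONOUNS[lang] for t in tokens) / len(tokens)

print(pronoun_ratio("They said it was fine", "en"))  # 0.4
print(pronoun_ratio("Ils ont dit que ça allait", "fr"))
```

Language-specific features (say, gender agreement errors in French) would sit alongside these shared ones rather than replace them.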

Maria Yancheva: 24:10.458 - 25:15.460

Yeah, just to add to that. One of the expectations, obviously, when folks are doing research or using this technology is that if they have a multinational trial, and we work with, you know, the top 10, top 20 pharmaceutical companies in the world, who have large trials with many different languages and many different geographical locations, they expect to have comparable outcome measures across all of them. And so this challenge that Jordan was talking about, this is something that we effectively had to spend six to twelve months in-house to develop: this cross-language framework for dealing with speech and language outcome measures in a way that is comparable and that we can extract for different languages. And it's something that, you know, people often ask, can you just build this with open source tools? This is one of the challenges that we've had to actually solve internally, by working with linguists, by developing internal IP on how to do this. And it was not a straightforward thing. It's something that requires a lot of time and a lot of validation.

Peter Reynolds: 25:16.909 - 25:30.699

That's interesting, because it's obviously not just about the technology and the programming, but about managing all that talent, from multiple industries, multiple professions, all around the world. That has to have its own set of challenges.

Maria Yancheva: 25:32.099 - 25:50.742

Yeah, absolutely. We need a very multidisciplinary team. And so we need that linguistics expertise in addition to obviously our software folks and machine learning expertise, as well as, you know, the scientific analysis side. So we have a big team of scientists as well within the company.

Jordan Ponn: 25:52.628 - 26:28.987

Yeah. And to add to some of the validation work that Maria was talking about: if you imagine nowadays, when you're working with any large language models or generative AI, sometimes if you give it a strange prompt, it kind of goes off the rails, giving you weird answers, weird results where you know, hey, this isn't quite right, but it'll still try to give you something, try to give you an answer. So that's where a lot of our validation work has gone: to make sure that, just because it gives us numbers, is it valid, or is it just making things up? And that's where a lot of the work has come in, to figure out and make sure that it's actually something that we can use and compare across different pieces of analysis.

Peter Reynolds: 26:33.274 - 27:08.595

Absolutely fascinating. And I think we could talk about this for hours and we would only have scratched the surface. I was wondering, Maria, Jordan, if you had any advice for those people like yourselves, you know, who are working on their master's at university and thinking about having a startup. What's something that you would advise them, that you wish you had done when you started?

Jordan Ponn: 27:09.075 - 27:53.390

I'm not going to say that everything I did was perfect. If anything, my path to where I am right now is very roundabout. My background was originally in mechanical engineering, and somehow I'm now in software. I was originally working on trying to do robotics, and now I'm working on healthcare technologies to detect early-stage Alzheimer's and other neurodegenerative diseases. So basically the furthest away you can get from anything biology or neurology. But from that standpoint, what I would say is: just stay curious, and be willing and open to try new things. It's not always going to be a planned path from point A to point B. So just go out there, keep trying things, keep learning new things, and see where life takes you. Just be open to trying new things.

Maria Yancheva: 27:56.589 - 28:57.301

I was thinking as Jordan was talking, I think one of the biggest challenges that we had was actually from the business model perspective. So we had this technology that everybody was excited about. We actually went through the Creative Destruction Lab at U of T, which helped us a lot. We had amazing mentors. But one thing that was challenging was that everybody had a different idea about how we could apply the technology. And so we had a solution in search of a problem. That was kind of the challenge for us in the first couple of years of Winter Light. And I think one thing that helps is just to iterate quickly through product hypotheses. I think we sometimes spent too long, because folks would be excited from an innovation perspective even if there isn't a clear business model or commercial model for the technology. So being able to prove or disprove hypotheses quickly, and iterate quickly from a product perspective, is really important.

Damon Adachi: 28:58.263 - 29:11.945


Oh, okay. Now you're talking my language. We're into figuring out what your true value is and where the market uptake is going to be best, and that's super interesting as well. Fantastic. All right, I've got to ask: can you explain the alpaca spirit animal of Winter Light?

Maria Yancheva: 29:14.301 - 29:20.166

Actually, Jordan is responsible for the existence of this animal. So maybe he can talk about it.

Jordan Ponn: 29:20.186 - 29:55.836

So I guess, after socializing, you know, drinks after work or lunches, we basically talked about weird animals, favorite animals, things of that nature. And one of Maria's favorite animals was the alpaca. So one time when I was in San Francisco on vacation, I saw this cute little doll and said, hey, wouldn't it be kind of cool to bring that back? And since we brought it back, it's kind of been appearing in a lot of our little Easter eggs here and there, and in some of our media content. I think there's a couple of spots on our website it might pop up on, if you look with a very keen eye.

Maria Yancheva: 29:57.608 - 30:12.478


There are some Easter eggs on the website. I think we've been on a serious call where our scientists were talking with clinical scientists from a pharmaceutical company, and they found this on the website. So that's always a fun conversation.

Jordan Ponn: 30:12.538 - 30:16.080

That's brilliant. It's effectively become our unofficial mascot at this stage.

Damon Adachi: 30:17.552 - 30:18.193


Make it official.

Peter Reynolds: 30:18.673 - 30:58.074

I'll tell you something: it's our third season and we've talked to so many businesses, so many entrepreneurs, and I love how there's this common thread regardless of the product you're bringing to market or the service you're providing. Who would have thought that in cognitive research there would be Easter eggs, that we would have a spirit animal? That still having to worry about marketing, and about being able to translate what it is you do so that people understand it, you know, and can trust it, is just as important in this industry as any other.

Damon Adachi: 30:59.406 - 31:16.294

Humanizing it. Absolutely. Technology is fabulous. Science is incredible, but it's the human element of what we're doing. It was mentioned earlier about being in the business of hope. I can feel it from you guys. I can understand that you're in this to do something better, to make the world a better place. Uh, and kudos to you. Great, great stuff.

Peter Reynolds: 31:17.984 - 31:33.939

Thanks so much. Maria, Jordan, thank you so much for sharing your expertise with us today. We've learned so much about the groundbreaking technology of Winter Light Labs and the challenges startups like yours face. It's been a really insightful conversation. Thank you so much for joining us.

Maria Yancheva: 31:35.480 - 31:36.641

Thank you so much for having us.

Peter Reynolds: 31:36.721 - 32:10.561

It was great to meet you. And thank you, of course, to my spirit animal, Damon Adachi, for bringing your unique perspective to the table, as always. And of course, finally, a big thank you to our audience. Your support makes all of this possible, and you can catch Pros & Conversations wherever you listen to your podcasts, or you can watch the video version on YouTube and see our beautiful faces. And don't forget to subscribe and leave a review. So for Maria, Jordan, and Damon, I'm your host, Peter Reynolds. You've been listening to Pros & Conversations, and we'll see you next time.
