Chris Lehane: A Name Synonymous with Strategy, Innovation, and Success

Image Credit: Photo by Benjamin Suter on Unsplash

Chris Lehane, the Chief Strategy Officer at Haun Ventures, has built a formidable reputation as both a prolific investor and a shrewd strategist in the business world. 

An Insightful Journey 

Before joining Haun Ventures, Lehane had an illustrious career in a variety of high-profile roles. His journey includes serving as an executive at Airbnb and co-founding the strategic consultancy Fabiani & Lehane. The firm has an impressive clientele spanning political figures, corporate giants, technology firms, the entertainment industry, and professional sports personalities. 

Lehane’s experience isn’t limited to the corporate world. He has served in government roles, notably as the press secretary to Vice President Al Gore and special assistant counsel to President Bill Clinton in the 1990s. 

Embracing the Crypto World at Haun Ventures 

Lehane’s career took an exciting turn when he entered the world of crypto at Haun Ventures, a venture capital firm focused primarily on web3. At Haun, he uses his expertise to help shape the firm’s investment strategy and growth. 

In addition to his role at Haun, Lehane is also a member of Coinbase’s Global Advisory Council. Coinbase is globally recognized as the second-largest crypto exchange, and Lehane’s association with it only amplifies his stature in the crypto world. 

Investment Strategy: A Deep Dive 

Haun Ventures operates with two investment vehicles – a $500 million early-stage fund and a $1 billion acceleration fund. The firm invests across sectors, identifying the most innovative areas and deploying capital strategically. 

Their approach to investing, especially in a bear market scenario, is a topic of interest for many. Lehane’s insights into the firm’s investment strategy and how he advises Coinbase are eagerly anticipated by the tech and business community. 

Alibaba-Backed Autonomous Driving Star, Deeproute, Targets European Carmakers

Image Credit: Photo by Roberto Nickson on Unsplash


The autonomous driving scene in Europe is about to get shaken up. Deeproute.ai, an autonomous driving startup headquartered in Shenzhen, China, plans to set up an operations center in Germany by 2024. The company, which has raised over $350 million in funding and employs more than 500 people worldwide, is the latest Chinese mobility upstart to establish a physical presence in Germany, home to some of the world’s largest automakers.

The move marks a significant step in the company’s global expansion strategy, which aims to collaborate with more local automakers and support OEM partners on smart driving mass production.

Unveiling The Plan

The announcement came during the International Motor Show Germany in Munich, where several Chinese mobility players marked their presence. The company has also revealed its plans to roll out its production-ready autonomous driving solution in Germany next year, with other European markets to follow.

Tech Behind The Wheel: Drive 3.0

The solution, dubbed Drive 3.0, works without HD maps and includes features like valet park assist. At a hardware price of $2,000, the system is competitively priced, an affordability that owes much to Deeproute’s work with low-cost Chinese lidar suppliers. Powering the system is Nvidia’s Drive Orin system-on-a-chip.

The Operations Center: A Mystery Unfolding

Details about the operations center remain sparse at this point. Deeproute has shared that it plans to hire a business development team in Germany, but the specifics about the operations center’s roles and responsibilities, as well as its geographical location, are yet to be disclosed.

Learning From History: Momenta’s Foresight

It’s worth noting that Deeproute is not the first Chinese autonomous driving company to establish a base in Germany. Suzhou-based competitor Momenta had the foresight to open an office in Stuttgart two years ago. This strategic move arguably paved the way for Momenta to foster closer relationships with its investor Mercedes-Benz, as well as other European OEMs.

Nio’s Footprint in Germany: A Case Study

Nio, another Chinese premium electric vehicle startup, already has a 1,500-sqm innovation center in Berlin and a design facility in Munich. In addition to these, it also runs a “Nio House,” a stylish members’ club and showroom in the German capital.

Deeproute’s Strategic Shift: From Robotaxi to Production-Ready Car

Like many ambitious Chinese AV startups, Deeproute initially focused on Level 4 driverless technologies to power robotaxis. However, over time, the company shifted its focus to less advanced driving solutions for auto partners, which could generate immediate cash flow. A representative from the company explained that they have been working with OEMs on mass production since last year, which prompted the shift in focus.

Deeproute’s Expanding Customer Base

As of the end of 2022, Deeproute’s services had provided over 800,000 passenger rides, most of which were executed by its robotaxis in major Chinese cities. With its impending expansion into Germany, the company hopes to grow its OEM customer base, which already includes Seres and Geely. A local business development team will be instrumental in connecting with more local automakers.

The Road Ahead: More Chinese Companies to Follow?

With Deeproute’s planned expansion, it’s possible that more Chinese robotaxi companies will consider venturing into Germany. If successful, Deeproute could potentially pave the way for other companies to follow suit, further solidifying China’s presence in the global autonomous driving landscape.

AI: The Reality of Hallucination in Large Language Models

Image Credit: Photo by julien Tromeur on Unsplash

Artificial Intelligence (AI) has been instrumental in transforming various sectors of industry and society. However, with great power comes great responsibility, and AI is no exception. There’s a peculiar phenomenon associated with AI, particularly Large Language Models (LLMs) like OpenAI’s ChatGPT, that has been causing quite a stir in the tech and business spheres. This phenomenon is often referred to as ‘hallucination’.

The Hallucination Phenomenon

Despite the enormous capabilities of AI, it has a peculiar tendency to generate information that doesn’t exist, or in simple terms, to ‘hallucinate’. These hallucinations range from benignly odd to seriously problematic. For instance, ChatGPT once erroneously asserted that the Golden Gate Bridge was transported across Egypt in 2016. The claim is absurd, and while it may be humorous, it’s indicative of an issue at the core of these models.

In a more serious instance, an Australian mayor threatened legal action against OpenAI when ChatGPT falsely claimed he had pleaded guilty in a high-profile bribery scandal. This misinformation not only has the potential to tarnish reputations but also raises legal and ethical concerns.

Researchers have also discovered that these AI-induced hallucinations can be exploited maliciously. Hackers can manipulate LLMs to disseminate harmful code packages to unknowing software developers. Furthermore, these models have been found to provide incorrect medical and mental health advice, such as falsely suggesting that wine consumption can prevent cancer.

Understanding the Training Process of Models

To comprehend why hallucinations occur, we must delve into how AI models are developed and trained. Generative AI models, including LLMs, essentially function as complex statistical systems that predict data, be it words, images, music, or speech. They lack genuine intelligence, learning from countless examples typically sourced from the public web.

For instance, if an AI model is presented with the phrase “Looking forward…” from an email, the AI might complete it with “… to hearing back” based on the pattern it has learned from countless similar emails. However, it’s important to remember that the AI doesn’t truly understand the sentiment of ‘looking forward’ to something.
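
To make that concrete, the following minimal sketch asks a small causal language model to finish the phrase. It assumes Python with the Hugging Face transformers library and the GPT-2 checkpoint, neither of which is mentioned in the article; they simply stand in for any LLM.

```python
# A minimal sketch of next-word completion with a small causal language model.
# GPT-2 and the `transformers` pipeline are illustrative assumptions, not the
# models the article discusses.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

# The model continues the prompt with statistically likely tokens; it has no
# notion of what "looking forward" actually means.
completions = generator(
    "Looking forward",
    max_new_tokens=5,
    num_return_sequences=3,
    do_sample=True,
)
for c in completions:
    print(c["generated_text"])
```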

Sebastian Berns, a PhD researcher at Queen Mary University of London, explains that the current LLM training framework involves ‘masking’ previous words for context and then predicting which words should replace the concealed ones. This concept is similar to predictive text in iOS, where we continually press one of the suggested next words.
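
The masking idea Berns describes can be illustrated with a fill-in-the-blank model. The sketch below assumes the Hugging Face transformers library and a BERT checkpoint as a stand-in; the article does not name any particular model or framework.

```python
# A minimal sketch of masked-word prediction: hide a word, then ask the model
# to rank candidates for the hidden position by probability. BERT is used here
# purely as an illustrative assumption.
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="bert-base-uncased")

for prediction in fill_mask("Looking forward to [MASK] back."):
    print(f'{prediction["token_str"]:>10}  p={prediction["score"]:.3f}')
```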

While this probability-based approach generally works well, it’s not flawless. Due to the vast range of words and their probabilities, LLMs can generate grammatically correct but nonsensical text. They can spread inaccuracies present in their training data or mix different information sources, even those that contradict each other.

The Inherent Challenges with AI Models

The issue with hallucination in AI models is not borne from malicious intent. These models don’t possess the capability for malice, and concepts of truth and falsehood are meaningless to them. They’ve learned to associate certain words or phrases with certain concepts, even if those associations aren’t accurate.

“Hallucinations are tied to an LLM’s inability to estimate the uncertainty of its own prediction,” Berns explains. “An LLM is typically trained to always produce an output, even when the input significantly deviates from the training data. A standard LLM doesn’t have a method to determine if it’s capable of reliably answering a query or making a prediction.”
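
One way to see what Berns means is to look at the raw probabilities a model assigns to its own tokens: the numbers exist, but nothing in standard training turns them into a calibrated ‘I don’t know’. The rough sketch below assumes PyTorch and GPT-2, purely for illustration.

```python
# A rough sketch of inspecting a causal LM's per-token probabilities.
# PyTorch and GPT-2 are illustrative assumptions; low probabilities hint at
# uncertainty, but a standard LLM is not trained to act on them.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

text = "The Golden Gate Bridge was transported across Egypt in 2016."
inputs = tokenizer(text, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # shape: (1, seq_len, vocab_size)

# Probability the model assigned to each actual token, given the tokens before it.
probs = torch.softmax(logits[0, :-1], dim=-1)
token_ids = inputs["input_ids"][0, 1:]
token_probs = probs[torch.arange(token_ids.size(0)), token_ids]

for tok_id, p in zip(token_ids.tolist(), token_probs.tolist()):
    print(f"{tokenizer.decode([tok_id]):>12}  p={p:.4f}")
```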

The Quest to Tackle Hallucinations

The challenge that lies ahead is whether hallucinations in AI models can be ‘solved’, and the answer depends on what ‘solved’ is taken to mean.

Vu Ha, an applied researcher and engineer at the Allen Institute for Artificial Intelligence, maintains that LLMs “do and will always hallucinate”. However, he also believes that there are tangible ways to reduce hallucinations, depending on how an LLM is trained and deployed.

For instance, a question-answering system can be engineered to have high accuracy by curating a high-quality knowledge base of questions and answers, and connecting this knowledge base with an LLM to provide accurate answers via a retrieval-like process.
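
A rough sketch of that retrieval-like process is shown below. It uses scikit-learn for the retrieval step and leaves the final model call as a placeholder, since the article describes the pattern rather than any specific stack; the knowledge-base entries are made up for illustration.

```python
# A rough sketch of retrieval-grounded question answering over a curated
# knowledge base. scikit-learn and the placeholder ask_llm() are assumptions;
# a real system would call an actual LLM with the retrieved context.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical curated question/answer pairs.
knowledge_base = [
    ("Who wrote the Toolformer paper?",
     "The Toolformer paper was written by researchers at Meta AI."),
    ("What is RLHF?",
     "RLHF fine-tunes a language model against a reward model trained on human feedback."),
]

questions = [q for q, _ in knowledge_base]
vectorizer = TfidfVectorizer().fit(questions)
kb_vectors = vectorizer.transform(questions)

def ask_llm(prompt: str) -> str:
    # Placeholder for a real model call; here it just echoes the grounded context.
    return prompt.split("Context: ")[1].split("\nQuestion:")[0]

def answer(query: str) -> str:
    # Retrieve the closest curated entry so the model answers from it, not from memory.
    scores = cosine_similarity(vectorizer.transform([query]), kb_vectors)[0]
    _, context = knowledge_base[scores.argmax()]
    prompt = f"Answer using only this context.\nContext: {context}\nQuestion: {query}"
    return ask_llm(prompt)

print(answer("Who are the authors of the Toolformer paper?"))
```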

Ha uses the example of running the question “Who are the authors of the Toolformer paper?” (Toolformer is an AI model trained by Meta) through Microsoft’s LLM-powered Bing Chat and Google’s Bard. Bing Chat correctly listed all eight Meta co-authors, while Bard incorrectly attributed the paper to researchers at Google and Hugging Face.

“Any deployed LLM-based system will hallucinate. The real question is if the benefits outweigh the negative outcome caused by hallucination,” Ha said. In other words, if there’s no obvious harm done by a model that occasionally gets a date or name wrong but is generally useful, it might be worth the trade-off.

Berns highlights another technique that has been used to reduce hallucinations in LLMs: reinforcement learning from human feedback (RLHF). Introduced by OpenAI in 2017, RLHF involves training an LLM, gathering additional information to train a “reward” model, and fine-tuning the LLM with the reward model via reinforcement learning.
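
The ‘reward’ model in that pipeline is commonly trained on pairs of responses that humans have ranked. The sketch below shows one widely used formulation of that training objective, a pairwise log-sigmoid loss in PyTorch; the scores are toy values, and the article itself does not spell out this level of detail.

```python
# A minimal sketch of the pairwise preference loss often used to train the
# "reward" model in RLHF pipelines. The scores below are toy stand-ins; real
# pipelines score full model responses with a fine-tuned reward head.
import torch
import torch.nn.functional as F

def reward_model_loss(reward_chosen: torch.Tensor, reward_rejected: torch.Tensor) -> torch.Tensor:
    # Push the score of the human-preferred response above the rejected one.
    return -F.logsigmoid(reward_chosen - reward_rejected).mean()

# Toy scores for a batch of (preferred, rejected) response pairs.
chosen = torch.tensor([1.2, 0.3, 2.0])
rejected = torch.tensor([0.5, 0.9, -0.1])
print(reward_model_loss(chosen, rejected))  # lower when preferred responses score higher
```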

Despite the effectiveness of RLHF, it has its limitations. “I believe the space of possibilities is too large to fully ‘align’ LLMs with RLHF,” warns Berns.

Exploring Alternate Philosophies

If hallucination in AI models can’t be fully solved with current technologies, is it necessarily a bad thing? Berns doesn’t think so. In fact, he suggests that hallucinating models could act as a “co-creative partner”, providing outputs that may not be entirely factual but contain useful threads to explore.

“Hallucinations are a problem if generated statements are factually incorrect or violate any general human, social or specific cultural values,” Berns explains. “But in creative or artistic tasks, the ability to come up with unexpected outputs can be valuable.”

Ha argues that we are holding LLMs to an unreasonable standard. After all, humans also “hallucinate” when we misremember or misrepresent the truth. However, with LLMs, we experience cognitive dissonance because the models produce outputs that look good on the surface but contain errors upon further inspection.

AI: The Unanticipated Solution to the Opioid Crisis?

The opioid crisis has confounded scientists for nearly two decades as they have tried to understand the shifting societal and systemic factors that lead people to misuse opioids, and to pinpoint likely overdose hotspots.

These painstaking and often imperfect efforts unfold as healthcare providers work to deliver safe, effective treatment and other resources to people grappling with addiction.

As scientists and healthcare providers take stock of the opioid crisis’s broad and lasting impact, many are now turning to artificial intelligence (AI) and asking: could AI be the key to ending the opioid crisis?

Tech Adoption in Healthcare: A Slow Process

Healthcare is not a sector known for swiftly adopting new trends; it’s notoriously slow to test and incorporate novel technology. That hesitance has consequences: one study suggests the industry loses more than $8.3 billion annually because of its slow or failed adoption of technologies such as advanced electronic health records.

However, the costs of the opioid crisis extend beyond financial losses. Since 1999, more than 1 million people have died of drug-related overdoses. In 2021, 106,699 drug overdose deaths were recorded in America, one of the highest per capita rates in the nation’s history. Approximately 75% of those overdoses were linked to opioid use, which includes prescribed analgesics such as Vicodin and Percocet, along with illicit drugs like heroin.

Despite the Centers for Disease Control and Prevention and the National Institutes of Health investing billions of dollars into outreach, education, and prescription monitoring programs, the crisis has stubbornly persisted.

The Opioid Crisis: The Human Cost

For the past decade, I have been conducting research on the opioid crisis in rural and urban communities across America, including New York City and rural southern Illinois.

Most of my peers concur, albeit reluctantly, that there’s a considerable amount of speculation involved in pinpointing the complex risks faced by drug users. Which drugs will they acquire? Will they inject, snort, or smoke them? Who, if anyone, will they use around, in case they overdose and require assistance?

But that’s not all. Practitioners also regularly grapple with inconsistent federal and state guidelines on effective treatments for opioid use disorder, like Suboxone. They also find themselves playing catch-up with increasingly unpredictable drug supplies contaminated with cheap, synthetic opioids like fentanyl, which is largely responsible for recent surges in opioid-related overdose deaths.

While AI advancements like ChatGPT have captured most of the public’s imagination, public health researchers and biomedical engineers have been quietly brewing an AI-infused revolution in medicine, with addiction prevention and treatment being the newest recipients.

AI Innovations in Opioid Crisis Management

Innovations in this space primarily use machine learning to identify individuals who may be at risk of developing opioid use disorder, disengaging from treatment, or relapsing. For instance, researchers from the Georgia Institute of Technology recently developed machine-learning techniques to identify individuals on Reddit who were at risk of fentanyl misuse, while other researchers built a tool for locating misinformation about treatments for opioid use disorder; both could allow peers and advocates to intervene with education.
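
At a high level, the Reddit work is a text-classification problem: given a post, estimate the risk it signals so that someone can reach out. The hedged sketch below uses scikit-learn with tiny made-up posts and labels; it is not the Georgia Tech researchers’ actual data, features, or model.

```python
# A hedged sketch of flagging potentially at-risk posts as text classification.
# The posts, labels, and model choice are illustrative only, not the method
# described in the research the article cites.
from sklearn.pipeline import make_pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Tiny, made-up training set: 1 = post suggests risk, 0 = no signal.
posts = [
    "looking for something stronger than my prescription",
    "pressed pills from an unknown source, anyone tried them?",
    "great hike this weekend, feeling good",
    "new recipe turned out well",
]
labels = [1, 1, 0, 0]

classifier = make_pipeline(TfidfVectorizer(), LogisticRegression())
classifier.fit(posts, labels)

new_post = "where can I find pills without a script"
risk = classifier.predict_proba([new_post])[0, 1]
print(f"estimated risk signal: {risk:.2f}")  # could prompt a peer or advocate to reach out
```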

Other AI-powered programs, such as Sobergrid, are developing the capacity to detect when individuals are at risk of relapsing — for example, based on their proximity to bars — then connecting them to a recovery counselor.
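
A proximity trigger of that kind can be sketched with little more than a distance check against flagged locations. The toy example below uses the haversine formula with made-up coordinates and thresholds; Sobergrid’s actual implementation is not described in the article.

```python
# A toy sketch of a proximity-based relapse-risk trigger. Coordinates, radius,
# and the alerting step are hypothetical illustrations only.
from math import radians, sin, cos, asin, sqrt

def distance_km(lat1, lon1, lat2, lon2):
    # Haversine great-circle distance between two points on Earth (km).
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))

BAR_LOCATIONS = [(40.7411, -73.9897), (40.7306, -73.9866)]  # made-up coordinates
ALERT_RADIUS_KM = 0.2

def maybe_alert_counselor(user_lat, user_lon):
    # If the user lingers near a flagged location, offer a connection to a counselor.
    if any(distance_km(user_lat, user_lon, lat, lon) < ALERT_RADIUS_KM
           for lat, lon in BAR_LOCATIONS):
        print("risk signal: offering to connect with a recovery counselor")

maybe_alert_counselor(40.7410, -73.9899)
```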

The most impactful advancements relate to the reduction of overdoses, often triggered by drug mixing. At Purdue University, researchers have developed and piloted a wearable device that can detect signs of overdose and automatically inject an individual with naloxone, an overdose-reversing agent. Another significant development has been the creation of tools to detect hazardous contaminants in drug supplies, which could drastically reduce fentanyl-driven overdoses.

The Potential Pitfalls of AI in Opioid Crisis Management

Despite the immense potential, there are concerns. Could facial recognition technology be used to locate people who appear intoxicated, leading to discrimination and abuse? Uber took a step toward this kind of capability in 2018, when it attempted to patent technology that would detect drunk passengers.

And what about dis/misinformation, a problem already plaguing chatbots? Might malicious parties embed incorrect information into chatbots to mislead drug users about risks?

The Fine Balance

Since Fritz Lang’s seminal silent film “Metropolis” in 1927, the public has been fascinated by the idea of new, humanlike technology making lives easier and richer. From Stanley Kubrick’s “2001: A Space Odyssey” in 1968 to films like “I, Robot” and “Minority Report” in the early 2000s, though, these hopeful visions have slowly morphed into a kind of existential dread.

It will be up to not just researchers and clinicians, but also patients and the broader public to keep AI honest and prevent humanity’s biggest challenges, like the opioid crisis, from becoming insurmountable ones.