

Guide dog 2.0: This robot talks and guides you to your destination

Kim Muntinga
13.4.2026
Translation: machine translated

Only two per cent of visually impaired people in the USA use a guide dog, because real dogs are expensive and rare. A robot from Binghamton University aims to close the gap: it guides, plans routes and talks to you.

A real guide dog understands around 20 commands. It guides, warns and protects, but it does not explain. It can't tell you whether the path to the left is shorter, whether the corridor is clear or how much longer it will take to get to the conference. However, this is exactly what the four-legged robot from Binghamton University can do.

Shiqi Zhang, Associate Professor at the School of Computing in the Thomas J. Watson College of Engineering and Applied Science, and his team have developed a robotic guide dog that uses large language models - specifically GPT-4 - to engage in genuine two-way communication with its users. The system plans routes, explains them before setting off and describes the surroundings in real time during the journey.

It is important to note that the demonstrations shown are not yet a fully autonomous system. The physical movement of the robot is currently monitored and controlled remotely by an expert, while voice interaction, route selection and situation description are already automated.

From lead tugs to spoken language

In an earlier iteration, the robot dog responded to tugs on the lead to change direction at junctions. The new approach goes much further.

Before the tour begins, the robot asks where you want to go. It suggests possible routes, tells you the estimated walking time and waits for your choice. On the way, the robot verbalises the surroundings: it tells you whether there is a long corridor ahead, announces obstacles and gives you a situational overview that a real dog cannot provide.

Real dogs only understand around 20 commands at best. But with robotic guide dogs, you can use GPT-4 with voice commands and have very strong language capabilities.
Shiqi Zhang, Associate Professor, Binghamton University

The system combines two key functions: so-called Plan Verbalisation - route planning before setting off - and Scene Verbalisation, i.e. the ongoing description of the surroundings during the tour. According to Zhang, the latter is particularly valuable, as situational and environmental awareness is severely limited without vision.
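Conceptually, the two functions could be sketched as follows. This is a minimal illustration, not the team's actual implementation: all names (`plan_verbalisation`, `scene_verbalisation`, `Route`) are hypothetical, and the real system generates these descriptions with GPT-4 rather than with fixed templates.

```python
# Hypothetical sketch of the two verbalisation steps described in the article.
# The real system uses GPT-4 to generate this language; templates stand in here.
from dataclasses import dataclass

@dataclass
class Route:
    name: str
    waypoints: list   # landmarks along the route
    minutes: float    # estimated walking time

def plan_verbalisation(routes):
    """Before departure: describe each candidate route so the user can choose."""
    lines = []
    for i, r in enumerate(routes, start=1):
        lines.append(f"Option {i}: via {' and '.join(r.waypoints)}, "
                     f"about {r.minutes:.0f} minutes.")
    return " ".join(lines)

def scene_verbalisation(observation):
    """During the walk: turn a perception snapshot into a spoken description."""
    parts = [f"You are in a {observation['place']}."]
    if observation.get("obstacles"):
        parts.append("Ahead: " + ", ".join(observation["obstacles"]) + ".")
    return " ".join(parts)

routes = [Route("A", ["the lobby"], 4),
          Route("B", ["corridor C", "the atrium"], 6)]
print(plan_verbalisation(routes))
print(scene_verbalisation({"place": "long corridor",
                           "obstacles": ["a cleaning cart"]}))
```

The split mirrors the article's distinction: one function runs once before setting off, the other runs continuously during the walk.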

The test: 7 blind participants in an office complex

To evaluate the system, the researchers recruited seven legally blind participants aged between 40 and 68, two of them with experience as guide dog owners. In a spacious multi-room office building, the robot navigated the test subjects to a conference room. The system asked for the destination, presented possible routes and then guided the participants step by step with ongoing voice instructions.

Each participant went through three variants: minimal voice interaction, environmental descriptions only during the journey, and finally the full system with route planning and real-time commentary. The results of the subsequent survey were clear: the combined variant scored best in the categories of usefulness, ease of communication and helpfulness.

In a supplementary computer simulation, the team tested the system using 77 navigation requests from 16 students: from direct formulations such as «I would like to go to the bathroom» to vague requests such as «I would like to sit down and rest.» With the option to ask questions, the system correctly recognised the destination in 94.8 per cent of cases. The system also proved robust against very noisy voice input - simulated with almost one in three incorrect characters.
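A noise test like the one reported could be reproduced roughly as follows. This is a sketch under assumptions: the function name and corruption scheme (randomly replacing about one in three characters with a different letter) are illustrative, and the paper's exact noise model may differ.

```python
# Minimal sketch: simulate noisy voice input by corrupting roughly
# one in three characters of a request. Assumption: simple character
# substitution; the study's actual noise model may differ.
import random
import string

def corrupt(text, error_rate=1/3, rng=None):
    """Replace each character, with probability error_rate, by a different letter."""
    rng = rng or random.Random(0)  # seeded for reproducibility
    out = []
    for ch in text:
        if rng.random() < error_rate:
            out.append(rng.choice([c for c in string.ascii_lowercase if c != ch]))
        else:
            out.append(ch)
    return "".join(out)

request = "I would like to go to the bathroom"
print(corrupt(request))  # garbled, but still mostly recognisable
```

Feeding such corrupted strings to the language model is what the robustness figure in the article refers to: even with this level of garbling, the system still identified the destination in most cases.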

A huge supply problem in the background

The research approach not only addresses a technical gap, but also a real social problem: in the USA, only around two per cent of visually impaired people use a guide dog.

The reason is not a lack of demand, but the enormous expense: training a real guide dog takes two to three years and costs around 50,000 US dollars. Less than half of the dogs complete the training successfully. A robotic guide dog could close or at least reduce this gap.

The cost of the robotic guide dog is still unclear. The research team is currently not disclosing the price of the system. In the USA, however, even classic guide dogs are generally not funded by health insurance companies, but are made possible by donations, foundations and charitable organisations.

A technical assistance system of this kind would therefore have to be publicly subsidised, covered through new insurance models or privately financed. Without corresponding funding programmes, such a robot would be virtually unaffordable for many of those affected.

The research team presented their work under the title «From Woofs to Words: Towards Intelligent Robotic Guide Dogs with Verbal Communication» at the 40th Annual AAAI Conference on Artificial Intelligence, one of the largest AI conferences in the world.

Outlook for the future

The next steps are clearly defined: the team is planning further user studies, wants to increase the system's autonomy, and aims to make the robot capable of covering longer distances both indoors and outdoors.

Only when perception, navigation and movement work together fully autonomously could the current research prototype become an assistance system suitable for everyday use.

Header image: Jonathan Cohen / Binghamton University


My interests are varied, I just like to enjoy life. Always on the lookout for news about darts, gaming, films and series.

