RESHAPE HEALTH GRANT WINNER
AI & Design to Advance Health Equity
Alvee is an AI-driven health equity data management platform that helps practices and health systems advance health equity and anticipate their patients’ social needs. Their platform helps identify and measure health disparities, predict and track social needs, activate data with real-time insights, and promote health equity.
Reshape Health Grant winners
The main reasons they won our Reshape Health Grant were the following:
The market opportunity is clear
Emerging AI technologies can be of great help to clinicians in their daily work, both by automating part of their tasks and by augmenting their expertise with technology.
The way they are organized around the challenge shows many good signals of startup health
They are close to potential users and can talk to them, there is a clear and sustainable business model, and they are well connected, building strong relationships with potential customers and partners.
They also had a good concrete problem we could help them solve
While the LLM they were using was already generating an output from the EHR information and clinical notes, it was too text-based and they hadn’t thought of how end-users (in this case, Social Workers, Physicians, Nurses, and Care Managers) would interact with it.
Regarding this last point, it’s a problem we are seeing more and more with LLM integration into actual products: it’s not only about using new technology in contexts where it makes sense, but also about how it is integrated into the existing workflows and tools of actual users.
Our Process focused on user research and the care navigator's journey
One of the things that differentiates us is that we get deep into the problem with desk research, interviews, and mapping activities to better diagnose a problem before jumping into solutions.
We have learned this enables us to propose a more robust action plan. Although the process can feel exhaustive to our customers in the initial stages, they end up appreciating it because it helps them reflect on what they are doing and how they are doing it. What is more, once the process finishes, they have a much better-equipped thinking partner on our side, one that can connect the dots between technological capabilities, UI ideas, end-user needs, and their contextual constraints.
Luckily, the Alvee team, led by Nicole Cook, was already fully into this mindset, and they shared with us a lot of materials to process and understand. They had already conducted academic research in the Social Determinants of Health (SDoH) area, so there were even interview transcripts to learn from. We went through all of this with a customized Design Sprint for AI-driven products.
Figure 1: Processing interviews from previous research to identify different workflows.
Figure 2: Processing a Paper we found related to Social Workers' workflows.
The ML experts participated and proposed ideas visually. We brought examples of emerging interaction patterns in the field of AI and introduced the importance of remixing ideas from others. This was key for leveling the field among all participants and helped us explore a myriad of ideas to select from.
Figure 3: More evolved concepts, remixing original ideas into a curated selection that blends well with the end-user context and workflows.
Designing AI-based products with LLMs and generative capabilities
As we have been talking about in our podcast, and in some other articles, designing AI products requires new skills, new design patterns to consider, and new collaborators to include. It’s also important to understand that, as a new technology, its introduction to end-users needs extra design care: most of them probably don’t know how it works and won’t care to understand the details, so if we don’t manage users’ expectations through communication and design, we can end up either disappointing them or scaring them away.
Let’s go through some examples of how this project deals with this:
The shared arena:
In this example, we turned the static list of actions that the ML model outputs into a recommended list that a care manager can accept or manipulate. From the title of the list itself (“Recommended First Actions”), the user can understand nothing is being enforced on them. Items pre-identified by AI are marked with Alvee’s sparkle icon, and we designed a custom state for the switch control that indicates ‘uncertainty’. Embracing uncertainty by making it visible is key, as ML models are not always certain of their outputs. On this last point, we recommend the Nielsen Norman Group article about how UX designers need to embrace uncertainty.
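One way to drive such an ‘uncertain’ switch state is to map the model’s confidence score to three UI states. A minimal sketch of the idea; the class name, thresholds, and field names are all hypothetical, not Alvee’s actual implementation:

```python
from dataclasses import dataclass

# Hypothetical thresholds -- the real product's values are not public.
ACCEPT_THRESHOLD = 0.85
UNCERTAIN_THRESHOLD = 0.5

@dataclass
class RecommendedAction:
    label: str
    confidence: float       # model's confidence in this recommendation, 0..1
    ai_suggested: bool = True  # drives the sparkle icon in the UI

    def switch_state(self) -> str:
        """Map model confidence to a tri-state switch: on / uncertain / off."""
        if self.confidence >= ACCEPT_THRESHOLD:
            return "on"
        if self.confidence >= UNCERTAIN_THRESHOLD:
            return "uncertain"  # custom visual state, user must confirm
        return "off"

actions = [
    RecommendedAction("Schedule food-bank referral", 0.92),
    RecommendedAction("Flag transportation need", 0.61),
    RecommendedAction("Housing follow-up", 0.30),
]
for a in actions:
    print(a.label, "->", a.switch_state())
```

The point of the tri-state is that mid-confidence items are neither silently accepted nor hidden; they surface as visibly uncertain so the care manager makes the final call.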
Allow manipulation prior to content generation:
We not only present what will be used to generate the plan, providing transparency to the Social Worker; we also let them keep control over what’s considered for the care plan before it’s generated, avoiding extra back-and-forth work later.
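The mechanic is simple: only the context items the user left checked ever reach the model. A sketch, with illustrative item data and prompt format (the real system’s prompt is not public):

```python
# Sketch: let the user include/exclude context items before the plan is generated.
# All names and data are illustrative.

def build_generation_context(items):
    """Keep only the items the user left checked; the rest never reach the model."""
    return [item["text"] for item in items if item["include"]]

items = [
    {"text": "Patient reports food insecurity (screening)", "include": True},
    {"text": "Transportation barrier noted in clinical note", "include": True},
    {"text": "Outdated housing flag from an old record", "include": False},  # user excluded
]

prompt = "Draft a care plan based on:\n" + "\n".join(
    f"- {t}" for t in build_generation_context(items)
)
print(prompt)
```

Because exclusions happen before generation, the user never has to strip irrelevant material out of a finished plan.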
Making the most of LLMs, attention, and summaries:
In the medical field, as in many others, most care workers are flooded with tasks and information. They don’t have time to go through extensive pieces of content, so generating summaries and highlighting relevant tokens with LLMs can be a great addition.
Allow for highlights manipulation:
Allowing the user to manipulate what has been highlighted by a Large Language Model is not only good for keeping them in control; it can also feed a feedback loop for the model itself. Even if the ML team spent a good amount of time fine-tuning their model, real-world data is always valuable, and it’s important to think about how we can capture implicit feedback from user interactions automatically.
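One way to capture that implicit feedback is to diff the model’s highlights against what the user kept: removed spans become implicit negatives, hand-added spans become misses. A sketch with a hypothetical event format, not the product’s actual feedback pipeline:

```python
def highlight_feedback(model_spans, final_spans):
    """Turn the user's highlight edits into labeled feedback events.

    Spans the user removed are implicit negatives ("false_positive");
    spans they added by hand are spans the model missed ("missed").
    The event format is hypothetical.
    """
    model, final = set(model_spans), set(final_spans)
    return (
        [{"span": s, "label": "false_positive"} for s in sorted(model - final)]
        + [{"span": s, "label": "missed"} for s in sorted(final - model)]
    )

events = highlight_feedback(
    model_spans={"housing", "food insecurity"},
    final_spans={"food insecurity", "transportation"},  # user removed one, added one
)
for e in events:
    print(e)
```

Logged over time, such events become exactly the kind of real-world labeled data the paragraph above describes, available for future fine-tuning without asking users to do any explicit annotation work.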