
Project Haven

Project Haven began as a research initiative at Johns Hopkins University, at the intersection of biomedical engineering and clinical neurology, and grew into a consumer-grade home monitoring system that changed how medically complex people were watched over during sleep. The project's name—coded "Haven" during the grant proposal phase by a member of the original research team—was never replaced: by the time anyone considered rebranding, every paper, every IRB protocol, every pilot family, and every clinician involved called it Haven, and the name had roots too deep to pull up.

In the Faultlines universe, Haven's significance extended beyond its technical capabilities. It represented the convergence of institutional research excellence and lived disability experience—a device shaped not only by engineers and neurologists but by a medical student named Logan Weston who understood what monitoring technology needed to do at 3 AM because he lived in a body that required monitoring and loved people whose bodies required more.

Development and Origin

The Problem

By the late 2020s, home monitoring for medically complex individuals remained dangerously fragmented. A family caring for someone with epilepsy, sleep apnea, and intellectual disability might use a baby monitor for video, a pulse oximeter clipped to a finger for blood oxygen, a seizure detection wristband for movement, and a CPAP machine with its own data stream—none of which communicated with each other. The caregiver had four different apps, three different alert tones, and no unified picture of what was happening. A drop in SpO2 combined with unusual movement combined with elevated heart rate meant something very different from any one of those signals alone, but no existing system could synthesize them.

The population most at risk of SUDEP (Sudden Unexpected Death in Epilepsy)—people with severe epilepsy and intellectual disabilities—was also the population least likely to tolerate wearable monitoring devices. Wristbands triggered sensory distress. Adhesive sensors caused skin irritation. Electrode-based systems required application by trained caregivers and were impractical for nightly use over months and years. The devices that could save lives were devices the people who needed them most couldn't wear.

Project Inception (~2028–2029)

Project Haven originated as a collaboration between the Johns Hopkins Department of Biomedical Engineering and the Department of Neurology's epilepsy division. The initial grant proposal focused on reducing SUDEP risk through continuous home monitoring that eliminated the need for body-worn sensors. The core research question was whether contactless sensing technologies—radar-based respiration and heart rate monitoring, AI-powered video movement analysis, and ambient environmental sensors—could be integrated into a single system capable of detecting seizure activity, respiratory events, and physiological deterioration during sleep with sufficient sensitivity and specificity to be clinically useful.

The project was coded "Haven" during the proposal phase. The name stuck.

Logan Weston's Involvement (~2030–2034)

Logan Weston entered Johns Hopkins School of Medicine around 2030, carrying a résumé that no other first-year medical student could match: a background in community health advocacy since age fourteen, early research experience in the Hopkins CRISPR lab at age thirteen, published work on diabetic neuropathy and post-trauma recovery, and—critically—lived experience as a wheelchair-using person with chronic pain, a traumatic brain injury, and deep personal connections to the epilepsy community through Jacob Keller, whose seizure management Logan had been involved with since adolescence.

Samir Panda, the postdoc who had conceived Haven and led its research team, recognized Logan's unique position at the intersection of clinical medicine, disability experience, and caregiving knowledge, and invited him to join the project in an advisory capacity. The invitation was carefully framed: as much involvement as Logan wanted, with the explicit understanding that he could scale back at any time without consequence.

Logan's initial hesitation was not insecurity. It was the specific wariness of a Black disabled man who had learned that being "the disabled voice" on a research project could become a trap—lived experience reduced to a line item on a grant application, consultation that amounted to a photo opportunity while the engineering team made the actual decisions. He had watched disability "consultation" happen at institutions that should have known better. He didn't want to be the wheelchair in the brochure.

Samir's framing—"the door is open and it stays open regardless of how far you walk through it"—told Logan this might be different. Charlie Rivera reinforced the message with characteristic directness, and Logan's broader circle of friends and family echoed the sentiment: this was where he literally shone. The intersection of technology, patient care, and lived disability experience was not a niche Logan had been assigned. It was a space he had been building toward his entire life.

Logan attended one meeting. He made one observation—the exact contribution varies by account, but multiple team members later credited a single insight that redirected the project's engineering approach. The most frequently cited version: Logan told the team that the system could not be wearable, because the people who needed it most were the people who would not tolerate something on their wrist. This was not a theoretical observation. It came from years of watching Jacob Keller's sensory overload during seizure monitoring, from understanding Sofia Medina's DS-related sensory sensitivities through Cisco, and from his own experience with medical devices that worked in controlled settings and failed in real lives. The engineering team went quiet. Then they started asking real questions. Logan came back for the next meeting, and the next, and somewhere between the third meeting and the sixth he stopped being the clinical consultant and became part of the team. Not because anyone gave him a title. Because the work needed him and he showed up and the showing up became the role.

Over the following years of medical school, Logan's involvement deepened organically. He contributed to clinical validation protocols, provided insights on what families actually needed versus what engineers assumed they needed, and advocated for design decisions that centered the caregiving experience—the person holding the monitor at 4 AM, not just the person sleeping in front of it. His understanding of alert tier design was particularly influential: he argued that existing monitoring systems failed because they treated all alerts equally, when the caregiver needed to know instantly whether a sound meant "check when you can" or "run."

Clinical Validation and Consumer Release (~2031–2035)

The clinical validation phase involved pilot deployments in homes of families caring for people with epilepsy and intellectual disabilities—the population the system was designed to serve. Feedback from pilot families shaped iterative refinements: the camera's infrared illumination was adjusted to avoid triggering photosensitive seizures, the radar sensor's range was calibrated for rooms of varying sizes, and the alert system was redesigned multiple times based on caregiver feedback about what was useful versus what was noise.

Charlie Rivera and Jacob Keller both volunteered as testers of their own accord—not because Logan recruited them, not because the project needed subjects, but because they saw what the team was building and understood it mattered and decided their bodies could contribute. Charlie's sleep profile provided an exceptional stress test for the system: his CFS/ME meant non-restorative sleep even when the architecture appeared normal, his POTS created autonomic instability visible in heart rate variability, his sleep apnea required clean CPAP data stream integration, and his EDS meant his body moved in ways that a standard movement-analysis AI might misread as pathological. If Haven could handle Charlie's sleep, it could handle almost anyone's.

Jake's epilepsy—generalized with mixed seizure types, including the myoclonic jerks that occurred in sleep-wake transitions and the tonic-clonics the system absolutely had to catch—combined with autism-related sensory sensitivities that had already ruled out wearable monitoring, made him exactly the user the contactless approach had been designed for. His partner Elliot Landry, who had served as Jake's primary seizure first responder for years, suddenly had data: not "I think he had two seizures last night because I woke up twice" but timestamped respiratory and movement logs showing exactly what happened and when.

Riley Mercer quietly volunteered as well, without fanfare and without needing to be asked. Riley's narcolepsy and asthma provided another unique data profile the team hadn't anticipated: narcolepsy meant the sleep architecture was fundamentally different from anyone else's, with REM intrusions and sleep-wake boundary dissolution that the AI needed to learn to distinguish from seizure-related events, and asthma meant respiratory pattern analysis became more complex—Haven needed to differentiate an apnea event from an asthma flare from normal respiratory variation. Riley's data taught the system things Charlie's and Jake's couldn't.

The consumer version of Haven reached the market around 2033–2034, manufactured under Hopkins licensing by a medical technology company (TBD). It was marketed not as a baby monitor or a general wellness device but as a medical-grade home monitoring system for people with complex health needs—a category that had not previously existed in consumer technology.

Specifications and Function

The Haven Home System consisted of several integrated components:

Contactless Radar Sensor: A millimeter-wave radar unit that read respiration rate, heart rate, and gross body movement through the air, without any sensors on the body. The radar operated at frequencies that penetrated bedding and clothing, detecting chest wall movement for respiration and micro-movements for cardiac rhythm. The contactless design was Haven's core innovation—the feature that made the system usable for people with sensory sensitivities, intellectual disabilities, or motor impairments that made wearable devices impractical.

Infrared Camera with AI Movement Analysis: A low-light camera with machine learning algorithms trained to distinguish seizure activity (tonic-clonic movements, tonic stiffening, rhythmic myoclonic jerks) from normal sleep movements (position changes, restless leg movements, REM-associated twitching). The AI learned the individual user's baseline movement patterns over a calibration period, flagging deviations from that person's normal rather than comparing against population averages.

Ambient Environment Sensors: Room temperature, humidity, and air quality monitoring. For someone like Sofia, whose thermoregulation was affected by hypothyroidism and whose respiratory system was vulnerable to dry or excessively warm air, environmental data provided context for physiological changes.

CPAP Integration: Haven could read the data stream from compatible CPAP and BiPAP machines, correlating apnea-hypopnea events with the system's own respiratory and cardiac data. This integration meant caregivers could see not just that the CPAP was running but whether it was working—whether oxygen saturation was holding, whether the mask had been displaced, whether breathing patterns were normalizing.

Optional Pulse Oximeter: For users requiring higher-precision SpO2 monitoring than contactless methods could provide, Haven supported integration with a finger-clip or adhesive pulse oximeter. In the Medina household, this was used during Sofia's illnesses when respiratory compromise was a concern, but not during routine nightly monitoring.

Caregiver Dashboard and Alert System: A unified app and optional dedicated display that synthesized all data streams into a single interface. The alert system used three tiers—green (all normal), amber (parameter deviation, check when convenient), and red (urgent, respond now)—with distinct audio tones for each tier. Alerts could be pushed simultaneously to multiple devices, allowing Claudia, Cisco, and Michelle to all receive notifications. Historical data was logged automatically, providing a record for medical appointments that replaced Claudia's handwritten notebook for overnight monitoring (though Claudia kept the notebook anyway, because Claudia trusted her own handwriting more than she trusted any device).

Two-Way Audio: Allowed a caregiver to speak through the room unit, so that "aquí estoy, princesita" ("I'm here, little princess") could travel from the kitchen to the guest room without anyone leaving the stove.
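The three-tier alert model described above can be sketched in Python. This is a minimal illustration rather than Haven's actual logic: the thresholds, field names, and `dispatch` helper are all hypothetical, and the real system flagged deviations from a learned per-user baseline rather than applying fixed cutoffs.

```python
from dataclasses import dataclass
from enum import Enum

class Tier(Enum):
    GREEN = "green"   # all normal
    AMBER = "amber"   # parameter deviation, check when convenient
    RED = "red"       # urgent, respond now

@dataclass
class Vitals:
    spo2: float              # blood oxygen saturation, %
    heart_rate: float        # beats per minute
    respiration_rate: float  # breaths per minute

def classify(v: Vitals) -> Tier:
    """Illustrative fixed thresholds only; the real system used
    per-user baselines, not population cutoffs."""
    if v.spo2 < 88 or v.respiration_rate < 6:
        return Tier.RED
    if v.spo2 < 92 or v.heart_rate > 110:
        return Tier.AMBER
    return Tier.GREEN

def dispatch(tier: Tier, devices: list[str]) -> list[str]:
    """Push the same alert to every linked caregiver device."""
    return [f"{device}: {tier.value} alert" for device in devices]
```

The key design point survives even in this toy version: every linked device receives the same tiered signal simultaneously, so no single caregiver is a point of failure.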

Interoperability and User Independence

Haven was designed to integrate with the technology ecosystems disabled people had already built for themselves—not to replace them, not to compete with them, and not to demand that anyone learn a new platform in order to stay alive overnight.

This interoperability reflected one of Logan Weston's core design principles: disabled people already had their tech configurations. They had already set up their iPhones with VoiceOver or their Android with TalkBack or their smartwatches with health monitoring. A system that forced them into a proprietary ecosystem or required a specific device brand created more work for people who already carried too much. Haven needed to fit into the life the person already had, not demand they rebuild around it.

Data Standards and Health System Integration

Haven's data architecture was built on HL7 FHIR (Fast Healthcare Interoperability Resources)—the same open standard used by major electronic health record systems including Epic, Cerner, and their successors. This was not a marketing decision. It was a clinical one. Logan insisted that Haven data be legible to the systems that already held the user's medical record, because data that lived in a standalone app and couldn't reach the user's doctor was data that existed in a vacuum.

The FHIR foundation enabled several critical capabilities:

Import: Haven could pull relevant clinical context from participating health systems—medication lists, known conditions, seizure history, allergy alerts—so that the monitoring AI had context for what it was observing. A heart rate spike in a user taking a beta-blocker meant something different than a heart rate spike in a user who was not. The import was read-only, required explicit user or caregiver authorization, and updated periodically rather than in real time to respect both bandwidth and privacy.

Export: Haven's logged data—sleep reports, seizure event timestamps, respiratory summaries, distress inference flags, CPAP compliance records, environmental trends—could be pushed electronically to participating health systems. For health systems running Epic, this meant Haven data could flow directly into the patient's MyChart record and appear in the provider's clinical dashboard as structured flowsheet data, available for the doctor to review before an appointment rather than relying on a caregiver's verbal report or handwritten notebook. For systems running eClinicalWorks (ECW), athenahealth, or other FHIR-compliant platforms, the same data exchange was available through standardized APIs.

User-Controlled Sharing: All data sharing was opt-in, explicitly authorized, and revocable. The user or their legal guardian controlled which health systems received Haven data, what categories of data were shared (a user might share seizure logs but not movement video, for example), and could revoke access at any time through the Haven app. Logan insisted on a consent architecture that was granular enough to be meaningful—not a single "share everything" toggle but category-level controls that respected the reality that some health data was more sensitive than other health data, and that the person generating the data deserved to decide where it went.
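The category-level consent controls described above might look something like the following sketch, assuming a simple per-recipient grant object; the category names and class shape are illustrative, not Haven's actual schema.

```python
from dataclasses import dataclass, field

# Hypothetical data categories for illustration only.
CATEGORIES = {"seizure_logs", "respiratory", "movement_video",
              "environment", "cpap"}

@dataclass
class ConsentGrant:
    """Per-recipient, per-category sharing authorization:
    granular, opt-in, and revocable at any time."""
    recipient: str                        # e.g. a health system identifier
    categories: set[str] = field(default_factory=set)

    def allow(self, category: str) -> None:
        if category not in CATEGORIES:
            raise ValueError(f"unknown category: {category}")
        self.categories.add(category)

    def revoke(self, category: str) -> None:
        self.categories.discard(category)

    def permits(self, category: str) -> bool:
        return category in self.categories
```

The point of the structure is what it makes impossible: there is no "share everything" toggle, so a user can grant a neurologist seizure logs while never exposing movement video.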

HIPAA Compliance: Haven was designed from inception as a HIPAA-compliant system. Logan's involvement ensured this was not an afterthought bolted on during regulatory review but a foundational design constraint. All data was encrypted at rest and in transit. The system met the technical safeguard requirements of the HIPAA Security Rule, including access controls, audit logging, transmission security, and integrity controls. Cloud-stored data (for the historical logs and multi-device synchronization) resided on HIPAA-compliant infrastructure with signed Business Associate Agreements. The Haven team underwent HIPAA compliance review as part of the FDA clearance process, and Logan personally reviewed the privacy architecture with the same clinical precision he brought to everything else—because he understood that for disabled people, whose medical data was already used against them by insurers, employers, and institutions, privacy was not an abstract principle. It was protection.

TEFCA Participation: By the time Haven reached the consumer market, the system participated in the Trusted Exchange Framework and Common Agreement (TEFCA)—the nationwide health data exchange network that allowed standardized, secure data sharing across organizational boundaries. This meant a Haven user whose primary care doctor was in New York and whose neurologist was at Hopkins in Baltimore could have their sleep and seizure data available to both providers without manual transfer, faxing, or the user having to remember to bring a printout to each appointment.
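Since the text names HL7 FHIR as Haven's data architecture, the export path can be illustrated with a minimal FHIR R4 Observation resource, here for a nightly respiration reading. LOINC 9279-1 is the real standard code for respiratory rate; the function name, patient identifier, and surrounding plumbing are hypothetical.

```python
from datetime import datetime, timezone

def respiration_observation(patient_id: str, breaths_per_min: float,
                            when: datetime) -> dict:
    """Build a minimal FHIR R4 Observation (vital-signs category)
    suitable for export to any FHIR-compliant EHR."""
    return {
        "resourceType": "Observation",
        "status": "final",
        "category": [{"coding": [{
            "system": "http://terminology.hl7.org/CodeSystem/observation-category",
            "code": "vital-signs"}]}],
        "code": {"coding": [{"system": "http://loinc.org",
                             "code": "9279-1",
                             "display": "Respiratory rate"}]},
        "subject": {"reference": f"Patient/{patient_id}"},
        "effectiveDateTime": when.isoformat(),
        "valueQuantity": {"value": breaths_per_min,
                          "unit": "breaths/min",
                          "system": "http://unitsofmeasure.org",
                          "code": "/min"},
    }
```

Because the resource uses only standard vocabularies (LOINC codes, UCUM units), the same payload is legible to Epic, eClinicalWorks, athenahealth, or any other FHIR-compliant platform without per-vendor translation.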

Personal Device Integration

Haven paired with iOS and Android devices without requiring proprietary hardware. The caregiver app was accessible by design: screen readers (VoiceOver, TalkBack) could navigate every screen, voice control could operate the dashboard, and the interface followed WCAG accessibility guidelines not as a compliance checkbox but because the people most likely to use Haven—disabled people and their families—were also the people most likely to depend on accessibility features. Data synced bidirectionally with Apple Health and Google Health Connect, meaning a user who already tracked their health metrics on their phone could see Haven data alongside their step counts, medication logs, and heart rate trends in a single unified view. Wearable data from Apple Watch, Fitbit, or other compatible devices could supplement Haven's contactless readings when the user chose to wear one.

User Independence and Data Routing

Critically, Haven did not default to caregiver mode. The system offered flexible data routing that respected the user's level of independence. For someone like Sofia Medina, who could not interpret her own health data, all alerts and logs routed to her caregivers—Claudia, Cisco, Michelle. For someone like Charlie Rivera, who was an independent adult managing his own complex health, the data routed to Charlie himself and to Logan Weston as his partner and physician—because Charlie wanted to see his own sleep patterns and Logan wanted the clinical data, and both of those needs were equally valid. For Jacob Keller, alerts routed to Jake, to Elliot Landry, and to Ava Keller—a configuration that respected Jake's autonomy while acknowledging that during a tonic-clonic seizure, Jake couldn't help himself and needed the people around him to have the information in real time.

The system adapted to the user rather than assuming a default. Not every disabled person needed a caregiver watching their data. Not every disabled person could watch their own. Haven honored both realities without privileging either one.
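The flexible data routing can be sketched as a small per-profile configuration. The names mirror the examples in the text; the structure itself is an assumption made for illustration.

```python
from dataclasses import dataclass, field

@dataclass
class Routing:
    """Who receives alerts and logs for one Haven profile.
    There is no default: each profile states its own answer."""
    user_sees_own_data: bool
    caregivers: list[str] = field(default_factory=list)

    def recipients(self, user: str) -> list[str]:
        out = [user] if self.user_sees_own_data else []
        return out + self.caregivers

# Sofia cannot interpret her own data; everything routes to caregivers.
sofia = Routing(user_sees_own_data=False,
                caregivers=["Claudia", "Cisco", "Michelle"])

# Jake sees his own data, and so do the people who respond to his seizures.
jake = Routing(user_sees_own_data=True,
               caregivers=["Elliot", "Ava"])
```

The absence of a default value is the design statement: independence and dependence are both configurations, not a norm and an exception.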

Distress Inference: Pain Detection in Users with Limited Communication

A later feature addition—developed during the clinical validation phase and refined through post-release updates—gave Haven the ability to analyze heart rate patterns, movement changes, respiratory irregularities, and facial muscle tension (visible on infrared camera) to infer potential distress in sleeping users. The feature was designed specifically for users with limited verbal or motor communication—people who could not reliably articulate their own pain or discomfort, whether due to intellectual disability, nonverbal status, post-ictal confusion, or unconsciousness.

Logan Weston's position on pain inference was characteristically precise and uncompromising: the system inferred potential distress. It did not diagnose pain. The distinction mattered. A heart rate spike during sleep could indicate pain, but it could also indicate a nightmare, a seizure aura, gastric reflux, anxiety, or a dozen other physiological events. The system provided data. The caregiver provided interpretation. And the user—if they could communicate in any modality, through any channel, by any means—always had the final word.

"The system is a bridge, not a replacement," Logan stated during a clinical advisory meeting. "It tells you something might be wrong. Your job is to ask. And if they can't answer, your job is to look harder—not to let the algorithm decide for them."

The feature used a tiered confidence system: low-confidence distress indicators triggered an amber informational alert ("Haven detected elevated heart rate and restlessness—check when convenient"), while high-confidence indicators—sustained heart rate elevation combined with facial tension, respiratory changes, and movement patterns consistent with pain response—triggered a red alert. The system learned each user's individual baseline over time, reducing false positives by flagging deviations from that specific person's normal rather than comparing against population averages.
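The tiered-confidence logic, with deviations measured against a personal baseline rather than population norms, can be sketched as follows; the z-score thresholds and signal names are illustrative assumptions, not Haven's actual model.

```python
from statistics import mean, stdev

def baseline_zscore(history: list[float], current: float) -> float:
    """Deviation of the current reading from this user's own baseline,
    in standard deviations. `history` is calibration-period data."""
    mu, sigma = mean(history), stdev(history)
    return (current - mu) / sigma if sigma else 0.0

def distress_tier(indicator_zscores: dict[str, float]) -> str:
    """Toy confidence tiers: count how many signals (heart rate,
    respiration, facial tension, movement) deviate strongly from
    this user's baseline. Thresholds are illustrative."""
    strong = sum(1 for z in indicator_zscores.values() if abs(z) >= 2.0)
    if strong >= 3:
        return "red"    # multi-signal deviation consistent with pain response
    if strong >= 1:
        return "amber"  # low-confidence indicator, check when convenient
    return "green"
```

Even in this toy form, the architecture encodes the clinical stance: a single deviant signal is information, not a diagnosis, and only converging evidence escalates to "respond now."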

Logan also acknowledged the harder truth: some users genuinely could not articulate pain in any modality. For someone like Sofia Medina, who could say "no me siento bien" ("I don't feel well") and pull on her ear, the system supplemented existing communication. But for users with more severe intellectual disability, or users who were nonverbal without AAC, or users who were unconscious or post-ictal, the system's ability to flag physiological distress patterns might be the only signal the caregiver received. In those cases, the algorithm was not replacing communication. It was providing a minimum where otherwise there would be nothing.

The ethical tension between respecting user autonomy and acknowledging that some users could not exercise it was addressed directly in Haven's documentation and in Logan's published commentary on the system. The feature was opt-in, required explicit caregiver consent and clinical recommendation, and came with guidance emphasizing that algorithmic distress inference should never replace direct observation, communication attempts, or clinical judgment. Logan insisted this language appear not in a buried settings menu but on the feature's activation screen: "This tool supplements your attention. It does not replace it."

The Physical Object

The Room Unit

The Haven room unit was a single integrated housing approximately six inches wide, four inches tall, and two inches deep—roughly the footprint of a hardcover book stood on its spine. The casing was soft-touch matte plastic in warm gray, with a slightly curved front face that softened the industrial geometry. It did not look clinical. It did not look like surveillance. It looked like something that belonged in a room where someone slept—closer to a high-end smart speaker than to anything with a medical supply catalog number.

The front face contained the infrared camera lens (a small dark circle, no larger than a pencil eraser, recessed slightly to avoid reflection), the radar sensor array (invisible behind the plastic housing—no external indicator of its presence), the ambient light and environmental sensors (a barely perceptible strip along the bottom edge), and a single LED status indicator: a small, soft-glow dot that pulsed slowly in muted teal when the system was active and monitoring. The teal was chosen deliberately during the design phase—warm enough to not read as clinical, dim enough to not disturb sleep, visible enough that a caregiver glancing into a dark room could confirm at a glance that the system was running. The LED dimmed automatically in response to room darkness and could be turned off entirely for users with light sensitivity.

The two-way audio speaker and microphone were integrated into the housing—no visible grille, just a series of micro-perforations in the casing that allowed sound through without breaking the visual surface. The speaker was engineered for vocal clarity at low volume: warm, not tinny, capable of carrying a human voice across a dark room without sounding like an intercom.

The unit mounted via a simple bracket system—two screws into a wall or shelf, the unit clicking into the bracket with a magnetic attachment that held firm but allowed easy removal for cleaning or relocation. It could also sit freestanding on a shelf or dresser top, weighted slightly at the base to prevent tipping. The power cord was USB-C, running to a standard wall adapter, with a six-foot cord length that gave flexibility in placement. The unit had no internal battery for continuous operation—it required wall power—but contained a small backup cell that maintained Wi-Fi connection and sent a "power lost" alert to all linked caregiver devices if the power was interrupted, so nobody woke up to a silent room and wondered if the system had been watching.

The unit came in two colorways: the standard warm gray and a softer warm white for rooms with lighter décor. Both were deliberately neutral—designed to disappear into a room rather than announce their presence. For families who had spent years surrounded by medical equipment that declared itself in beige plastic and flashing lights, Haven's visual restraint was not an aesthetic choice. It was a philosophical one. The device served the person sleeping in the room. It did not need to remind them, or anyone else, that they were being monitored.

The Caregiver Display

The optional dedicated display was a seven-inch touchscreen tablet with the same soft-touch matte finish as the room unit, designed to sit on a kitchen counter, a coffee table, or a nightstand. It showed the room view in real time—infrared footage of the sleeping space, the CPAP unit visible if present, the form of the person under the blankets rendered in the gray-green tones of night vision. The vitals overlay—respiration rate, heart rate, room temperature, humidity—updated continuously in the lower corner of the display, small enough to be unobtrusive but readable at a glance.

The display's screen brightness adjusted automatically to ambient light. In a dark bedroom at 3 AM, the screen dimmed to near-black, showing vitals in a whisper of light that wouldn't wake a sleeping partner. In a lit kitchen during the day, it brightened to full readability. The touchscreen responded to standard gestures: tap for detail on any vital, swipe for historical data, long-press for settings. The interface was designed for one-handed operation and for tired hands—large touch targets, no small buttons, no gestures that required precision when precision was the first thing exhaustion took.

In the Medina carriage house, the display lived in the kitchen where Claudia could see it while she cooked. The room view showed the guest room sofa bed, the CPAP unit on its stand, Sofia's form under the blankets. Claudia checked it the way she checked the stove—automatically, without thinking, a glance that confirmed everything was where it should be. Cisco had the app on his phone. Michelle had the app on hers. The display was Claudia's, because Claudia trusted what she could see with her own eyes more than she trusted what a phone could tell her.

Accessories

The Haven ecosystem included several optional accessories, available separately, that extended the system's capabilities for users whose needs or preferences went beyond the base unit's contactless monitoring.

Haven Band (Wearable Armband): A slim, flexible armband worn on the upper arm or wrist that provided continuous pulse oximetry (SpO2), heart rate, skin temperature, and electrodermal activity data—higher-precision biometric monitoring than the contactless radar could achieve alone. The band communicated wirelessly with the Haven room unit and integrated seamlessly into the same dashboard and alert system. It was designed for users who could tolerate a wearable: the band was thinner and lighter than most consumer fitness trackers, with a soft hypoallergenic silicone strap and no rigid housing pressing against the skin. The strap width was adjustable through interchangeable sizes, and the band's outer face could be customized—color options included standard black, white, and gray, as well as a range of brighter options. Custom and third-party strap covers were available, because the design team recognized early that a device people had to wear every night needed to be something they didn't resent wearing.

Sofia Medina's Haven Band was pink and sparkly. This was non-negotiable. Claudia had ordered the pink strap the day the system was installed, and Sofia had claimed it with the same possessive certainty she brought to everything she loved. She wore it not because she understood it was monitoring her oxygen saturation and heart rate and skin temperature—she wore it because it was pretty and it was hers and it sparkled when the light caught it. The fact that it was also keeping her alive was, from Sofia's perspective, entirely secondary to the aesthetics. The engineering team, had they known, would have considered this the highest possible compliment: the device had become an accessory rather than an apparatus.

Haven Pad (Under-Mattress Sensor): A thin, flexible pressure-sensing mat placed beneath the mattress that provided ballistocardiography (BCG) data—detecting heartbeat, respiration, and movement through mattress vibration without any contact with the sleeper's body. The pad was designed for users who could not tolerate any wearable, including the band, and for whom the contactless radar alone didn't provide sufficient sensitivity. The pad was particularly useful for detecting subtle seizure activity that the camera might miss in a heavily blanketed user, and for providing respiratory data in rooms where the radar sensor's range or angle was suboptimal. The pad was waterproof, machine-washable (removed from the mattress for cleaning), and thin enough that the sleeper couldn't feel it through the mattress.

Haven Clip (Portable Pulse Oximeter): A small finger-clip pulse oximeter that connected to the Haven system via Bluetooth for high-precision SpO2 monitoring during acute illness or respiratory concern. Unlike the Band, the Clip was not intended for nightly use—it was an escalation tool, pulled from the supply drawer when the user was sick and respiratory compromise was a concern. In the Medina household, the Clip came out during Sofia's colds and respiratory infections, when the combination of her immune vulnerability and sleep apnea made oxygen monitoring a clinical priority rather than a background data point.

Haven Go (Travel Unit): A compact, battery-powered version of the room unit designed for travel and temporary locations—hotel rooms, hospital stays, family visits, respite care facilities. The Go unit contained the radar sensor and environmental monitors but not the infrared camera (to simplify setup and reduce privacy concerns in unfamiliar environments). It paired with the same user profile and caregiver dashboard as the home units, maintaining continuity of monitoring and data logging across locations. Battery life was approximately twelve hours on a full charge—enough for a single overnight—with USB-C charging from a wall outlet or any standard external power bank, because hotel rooms didn't always have conveniently placed outlets and hospital bedsides were already a tangle of cords. The Go unit was smaller than the home unit, roughly the size of a deck of cards, and came with a foldable stand and a travel case. For families who traveled with medically complex members, the Go unit meant the monitoring didn't stop at the front door. Logan's design principle applied here too: the system followed the person, not the other way around.

Haven Pack (Travel and Storage Bag): A purpose-built bag designed to carry the full Haven travel kit—Go unit, Band, Clip, power bank, charging cables, foldable stand—alongside the user's other medical supplies. The Pack was not an afterthought accessory; it was designed from the ground up with input from caregivers and users who already traveled with medical equipment and knew exactly what failed about every other bag they'd tried. The main compartment was padded and organized with elastic-secured slots for each Haven component, so nothing rattled, nothing shifted, and everything could be located by touch in a dark hotel room. Additional zippered compartments held medication, CPAP supplies, extra straps, a pulse oximeter, and whatever else the user's medical kit required. The exterior was water-resistant, durable, and unremarkable—it looked like a day bag, not a medical supply carrier, because the people who used it had spent enough of their lives with equipment that announced their disability to strangers.

Critically, the Haven Pack was wheelchair-mountable. It attached to the back of a manual or power wheelchair via universal straps with quick-release buckles, sitting flush against the seatback without interfering with push handles or tilt mechanisms. This was not a feature request that came from a focus group. It came from Logan Weston, who had spent his adult life navigating the world in a wheelchair and who understood that a bag designed for disabled travelers that couldn't be mounted on a wheelchair was a bag designed by people who had never met a disabled traveler. The mounting system was tested with multiple wheelchair models during the validation phase and refined until it worked on everything from lightweight sport chairs to heavy-duty power chairs with recline functions.

All accessories synced automatically with the Haven room unit and caregiver app. No separate setup was required—pairing was handled through the app, and data from accessories was integrated into the same unified dashboard alongside the contactless sensor data. Accessories could be added or removed at any time without recalibrating the base system.
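The Haven Pad's core signal problem, separating heartbeat and respiration in a single mattress-vibration trace, can be illustrated with a toy ballistocardiography pipeline: high-pass the raw pressure signal to strip the slow respiratory component, then count the remaining cardiac peaks. This is a minimal sketch on synthetic data, not Haven firmware; every function name, sampling rate, and threshold here is an illustrative assumption.

```python
import math
import random

def simulate_bcg(duration_s=30.0, fs=50, hr_bpm=72, rr_bpm=15):
    """Synthetic under-mattress pressure trace: a small cardiac
    oscillation riding on a larger respiratory wave, plus noise."""
    random.seed(0)
    sig = []
    for i in range(int(duration_s * fs)):
        t = i / fs
        heart = 0.3 * math.sin(2 * math.pi * (hr_bpm / 60.0) * t)
        breath = 1.0 * math.sin(2 * math.pi * (rr_bpm / 60.0) * t)
        sig.append(heart + breath + random.gauss(0, 0.05))
    return sig

def estimate_heart_rate(sig, fs=50):
    """Crude BCG heart-rate estimate: subtract a one-second moving
    average (removing sub-1 Hz respiratory drift), then count peaks
    above a threshold with a refractory gap between counts."""
    win = fs  # 1-second averaging window
    filtered = []
    for i in range(len(sig)):
        lo, hi = max(0, i - win // 2), min(len(sig), i + win // 2)
        baseline = sum(sig[lo:hi]) / (hi - lo)
        filtered.append(sig[i] - baseline)
    peaks, last_peak = 0, -fs
    for i in range(1, len(filtered) - 1):
        is_local_max = filtered[i] > filtered[i - 1] and filtered[i] >= filtered[i + 1]
        if filtered[i] > 0.15 and is_local_max and i - last_peak > fs // 2:
            peaks += 1
            last_peak = i
    return peaks / (len(sig) / fs / 60.0)  # beats per minute

bpm = estimate_heart_rate(simulate_bcg())
```

A production pipeline would use proper band-pass filtering and far more robust beat detection, but the structure—one contact-free pressure channel decomposed into respiration, heartbeat, and movement—is the same idea the Pad's description implies.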

The Sound

Haven's default operating mode was silent. Unlike medical monitors that beeped continuously, Haven only produced sound during alerts. The three tiers had distinct tones designed to be distinguishable even through sleep: green acknowledgment (a soft chime, used only when manually requested), amber (a two-tone ascending note, meant to wake a light sleeper without triggering panic), and red (a sharp, insistent three-pulse tone that cut through anything).
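The silent-by-default behavior and the three-tier tone scheme amount to a small lookup table plus one rule: nothing sounds unless it is an auto-playing alert or a manually requested acknowledgment. A minimal sketch in Python; the pulse counts follow the description above, while the type names and specific frequencies are hypothetical, not Haven's actual specification.

```python
from dataclasses import dataclass
from enum import Enum

class Tier(Enum):
    GREEN = "green"   # acknowledgment chime, manual request only
    AMBER = "amber"   # two-tone ascending note, wakes a light sleeper
    RED = "red"       # sharp, insistent three-pulse tone

@dataclass(frozen=True)
class Tone:
    pulses: int              # number of pulses in the pattern
    frequencies_hz: tuple    # pitch of each pulse (illustrative values)
    auto_play: bool          # sounds without a manual request

TONES = {
    Tier.GREEN: Tone(pulses=1, frequencies_hz=(660,), auto_play=False),
    Tier.AMBER: Tone(pulses=2, frequencies_hz=(440, 554), auto_play=True),
    Tier.RED:   Tone(pulses=3, frequencies_hz=(880, 880, 880), auto_play=True),
}

def should_sound(tier: Tier, manually_requested: bool = False) -> bool:
    """Silent by default: only alerts, or an explicitly requested
    green acknowledgment, produce any sound at all."""
    return TONES[tier].auto_play or manually_requested
```

The design choice worth noting is that silence is the resting state of the table, not an override—green is the only tier that requires an affirmative request to make noise.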

The two-way audio introduced a different kind of sound: the human voice. When Claudia said "aquí estoy, princesita" ("I'm here, little princess") through the Haven speaker, Sofia heard her mother's voice in the room even though her mother was in the kitchen. The sound quality was warm enough—not tinny, not compressed—that it functioned as genuine vocal presence rather than intercom.

In the Medina Household

Haven was installed in the guest room of the Medina Carriage House around 2034–2035, replacing the consumer baby monitor that had previously been used to monitor Sofia during naps and overnights. The installation was recommended by Logan Weston, who understood Sofia's specific monitoring needs (epilepsy, sleep apnea, CPAP compliance, immune vulnerability) and had been involved in Haven's development. The system was funded by Ezra Cruz, who had it delivered and installed without discussion, because Ezra didn't discuss the things he did for the people inside his perimeter—he just did them.

The transition from baby monitor to Haven was seamless for everyone except Claudia, who spent the first two weeks distrustful of any device that claimed to know more about her daughter's sleep than she did. Michelle ran interference, walking Claudia through the app, showing her the display, demonstrating how the alert tiers worked. Cisco let his mother adjust at her own pace and kept his phone alerts set to maximum sensitivity regardless. Logan provided a single-page guide in Spanish—typed, not handwritten, because his handwriting was affected by his TBI—explaining what each alert meant and what to do. Claudia kept the guide on the refrigerator and eventually stopped needing it.

Multi-Unit Architecture: "It's Not a Haven; It's a Cage"

One of Logan Weston's most consequential contributions to Haven's development was his insistence that the system support multiple units linked to a single patient profile. The engineering team had originally designed Haven for deployment in a single room—one unit, one location, one patient. Logan's response became one of the project's defining quotes and a phrase that would later appear in the system's own marketing materials: "If it only works in one room, it's not a haven. It's a cage."

The insight came from lived experience on multiple fronts. Logan moved between spaces constantly—his home with Charlie, the band house, Hopkins—and understood what it meant to depend on equipment bolted to one location. He knew that Sofia Medina's CPAP traveled with her every time she visited the carriage house because Claudia packed it, along with the medication, the pillow, and everything else Sofia's body required. If the monitoring system didn't travel too, then the most dangerous hours of Sofia's day—sleeping in a room where the acoustic calibrations were different and the environmental baselines were unfamiliar—were the hours she was unprotected. Disabled people didn't stay in one place. They visited family. They traveled. They slept in guest rooms and on couches and at their brother's house on Sundays. A system that couldn't follow them wasn't monitoring; it was tethering.

The multi-unit architecture that resulted allowed multiple Haven units to connect to a single patient profile. Each unit calibrated independently to its room's acoustics and dimensions, but all units fed into the same caregiver dashboard, the same alert routing, and the same historical data log. When Sofia was at Claudia's apartment—her primary residence—Claudia's unit was active; when Sofia stayed over at the carriage house, the carriage house unit was active. The system determined which unit was in use by which radar sensor registered a person in the room. Cisco, Michelle, and Claudia all kept the app on their phones regardless of where Sofia slept: Cisco checked it from the band house main building even when Sofia was asleep thirty feet away in the carriage house, and Claudia checked it from home when Sofia was at Cisco's, because mothers didn't stop monitoring just because someone else was on duty.
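The presence-based switching described above—several units, one profile, one shared log—can be sketched as a small data model. A minimal sketch, assuming a simple "first unit whose radar registers a person is active" rule; all class and field names are hypothetical, not Haven's actual software.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Unit:
    location: str
    presence_detected: bool = False  # radar registers a person in the room

@dataclass
class PatientProfile:
    name: str
    units: List[Unit] = field(default_factory=list)
    events: list = field(default_factory=list)  # one shared historical log

    def active_unit(self) -> Optional[Unit]:
        """The active unit is whichever linked unit currently detects
        the person; all other units stay idle but remain linked."""
        for unit in self.units:
            if unit.presence_detected:
                return unit
        return None

    def log_event(self, description: str) -> None:
        """Events from any unit land in the same profile-wide log."""
        unit = self.active_unit()
        where = unit.location if unit else "no active unit"
        self.events.append((where, description))

# Two linked units, one profile: the arrangement described above.
profile = PatientProfile("Sofia")
apartment = Unit("apartment")
carriage_house = Unit("carriage house")
profile.units = [apartment, carriage_house]

carriage_house.presence_detected = True  # an overnight at the carriage house
profile.log_event("sleep onset detected")
```

The point the sketch makes concrete is that the profile, not the unit, is the durable object: units come and go with locations, while the log and the caregiver dashboard follow the person.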

In the Medina family's case, two units were deployed: one in Claudia and Sofia's apartment (the primary unit, where Sofia slept most nights) and one in the guest room of the Medina Carriage House. Both were funded by Ezra Cruz. Both were recommended by Logan. The installation at Claudia's apartment happened first; the carriage house unit followed within the week.

In Daily Use

Sofia did not know what Haven was or what it did. The unit on the shelf was furniture, like the lamp or the CPAP. She never interacted with it, never noticed it, and was never bothered by it—which was, from the engineering perspective, exactly the point. The best assistive technology was the kind the person it served never had to think about.

Legacy and Lasting Impact

Project Haven represented a shift in how home monitoring technology was conceived—from fragmented single-function devices to integrated systems designed around the actual experience of caregiving. Logan Weston's involvement ensured that the system's design centered the people who would use it daily: not just the engineers who built it or the clinicians who prescribed it, but the mothers who checked it at 3 AM, the brothers who carried the phone with alerts set to maximum, the families who had been cobbling together inadequate monitoring solutions for decades because nothing better existed.

The project also represented a model for disability-informed research—not "about us without us" consultation, but genuine integration of lived experience into the engineering process. Logan's contribution was not tokenistic. It was structural, shaping fundamental design decisions that made the system usable for the populations that needed it most.

For the Medina family, Haven was infrastructure. It was the system that let Claudia cook dinner while Sofia napped, that told Cisco whether to check or to run, that logged the data Sofia's doctor needed without requiring anyone to write it down, that carried Claudia's voice from the kitchen to the guest room when Sofia called out upon waking. It was not miraculous. It was practical, daily, and essential—the same words that described everything the Medina family had built to hold Sofia's life together.


Tags: Technology · Medical Devices · Samir Panda · Logan Weston · Sofia Medina · Johns Hopkins