Real-time teletaction system mapping high-resolution 3D data from vision-based tactile sensors to compliant shape-changing haptic displays.
Defensibility
Citations: 0
Co-authors: 2
Feelit addresses a significant bottleneck in teleoperation: translating high-fidelity tactile sensor data (input) into human-perceptible haptic feedback (output). While vision-based tactile sensors such as Meta's DIGIT or GelSight have matured, the 'display' side of the loop remains crude. This project bridges that gap by mapping 3D sensor point clouds to a compliant shape-changing display.

From a competitive standpoint, defensibility is low (3/10): the repository is primarily an academic artifact with zero stars and minimal community traction (2 forks), suggesting it serves as a reference for the paper rather than a living ecosystem. The moat lies in the specific hardware-software calibration and mapping algorithms, but these are replicable by other robotics labs (e.g., Stanford's CHARM Lab or MIT's CSAIL).

Frontier risk is low because specialized haptic hardware for teleoperation is too niche for current LLM-focused labs like OpenAI or Anthropic. However, the project faces 'platform' risk from robotics-heavy firms such as Tesla or Figure if they decide to verticalize their teleoperation stacks. The long displacement horizon (3+ years) reflects the inherently slow pace of hardware-dependent robotics research and the lack of standardized haptic display hardware in the market.
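As a rough illustration of the input-to-output translation described above (not the repository's actual algorithm), the sketch below quantizes a tactile sensor point cloud onto a pin-array shape display. All names and dimensions are hypothetical assumptions: `pointcloud_to_pin_heights`, the 16x16 pin grid, the 20x20 mm contact patch, and the 5 mm actuator travel are not taken from the Feelit codebase.

```python
import numpy as np

def pointcloud_to_pin_heights(points, grid_shape=(16, 16),
                              workspace=((0.0, 0.02), (0.0, 0.02)),
                              max_travel=0.005):
    """Map a tactile point cloud (N x 3, meters) onto a pin-array
    shape display by keeping the deepest indentation per grid cell.

    Hypothetical setup: 16x16 pins over a 20x20 mm patch, 5 mm travel.
    Assumes points[:, 2] is a nonnegative indentation depth.
    """
    (x0, x1), (y0, y1) = workspace
    rows, cols = grid_shape
    heights = np.zeros(grid_shape)

    # Bin each point's (x, y) position into its pin cell.
    ix = np.clip(((points[:, 0] - x0) / (x1 - x0) * cols).astype(int),
                 0, cols - 1)
    iy = np.clip(((points[:, 1] - y0) / (y1 - y0) * rows).astype(int),
                 0, rows - 1)

    # Keep the maximum depth observed in each cell.
    np.maximum.at(heights, (iy, ix), points[:, 2])

    # Rescale sensor depth to the actuator's travel range.
    z_max = heights.max()
    if z_max > 0:
        heights = heights / z_max * max_travel
    return heights

# Example: 1000 synthetic contact points over a 20x20 mm patch.
cloud = np.random.rand(1000, 3) * np.array([0.02, 0.02, 0.002])
pin_commands = pointcloud_to_pin_heights(cloud)  # shape (16, 16), meters
```

In a real teletaction loop, a function like this would run per frame, with the per-cell maximum acting as a crude downsampling filter; the actual system presumably adds calibration and temporal smoothing on top.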
TECH STACK
INTEGRATION: reference_implementation
READINESS