HAPTIC PLATFORM AND ECOSYSTEM FOR IMMERSIVE COMPUTER MEDIATED ENVIRONMENTS
PCT/US2023/029559
[RUBIN, Jacob A., CROCKETT, Robert S., FOLEY, Edward Leo, LEE, Donald Jeong, MARINO, Joseph R., EICHERMUELLER, Michael C., RUBIN, Madeline K., LIU, Joanna Jin, KJOS, Leif Einar, CELLA, Charles Howard, BUTZER, Mitchell Stanley, WEBER, Christopher Todd, MEDEIROS, Benjamin John, BALLEW, Eric Alexander, ROJANACHAICHANIN, Bodin Limsowan, BROOKS, Teyvon John Hershey, BAIRD, Johnathan, REIMER, Keenan]
2701 McMillan Avenue, Suite 160, San Luis Obispo, California 93401
A device may receive, by a haptic interface module that is configured to interact with one or more haptic interface devices and an application that generates a computer-mediated environment comprising an avatar corresponding to a user wearing the one or more haptic interface devices, respective sensor data for each respective haptic interface device, wherein the respective sensor data for a respective haptic interface device indicates respective positioning of respective sensors of the respective haptic interface device. A device may process, by the haptic interface module, the respective sensor data to generate respective relative location data for each respective haptic interface device, wherein the relative location data is relative to a reference location defined with respect to the corresponding haptic interface device. A device may receive, by the haptic interface module, tracked location data from one or more motion tracking sensors, wherein the tracked location data indicates respective locations of the one or more haptic interface devices relative to a spatial environment of the user. A device may generate, by the haptic interface module, a series of motion capture frames based on the tracked location data and the respective relative location data for each respective haptic interface device, wherein each respective motion capture frame indicates a set of locations and orientations for each respective haptic interface device at a given time. A device may generate, by the haptic interface module, a series of kinematic frames based on the series of motion capture frames and one or more mediation processes that collectively convert, for each of the motion capture frames, the set of locations and orientations of the one or more respective haptic interface devices into a set of intended locations and intended orientations for configuring the avatar in the computer-mediated environment.
A device may output the series of kinematic frames to the application, wherein the kinematic frames are provided to the application as user input.
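The pipeline described above — per-device sensor data converted to device-relative locations, fused with externally tracked poses into motion capture frames, then mediated into kinematic frames for the avatar — can be sketched in code. The sketch below is illustrative only: the class and method names (`HapticInterfaceModule`, `relative_locations`, `motion_capture_frame`, `kinematic_frame`), the pose representation, and the trivial mean-offset fusion and identity mediation are all assumptions, not the filed implementation.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List, Tuple

Vec3 = Tuple[float, float, float]

def _sub(a: Vec3, b: Vec3) -> Vec3:
    return (a[0] - b[0], a[1] - b[1], a[2] - b[2])

def _add(a: Vec3, b: Vec3) -> Vec3:
    return (a[0] + b[0], a[1] + b[1], a[2] + b[2])

@dataclass
class Pose:
    location: Vec3
    orientation: Vec3  # Euler angles, chosen here only for simplicity

@dataclass
class MotionCaptureFrame:
    timestamp: float
    poses: Dict[str, Pose]  # device id -> measured world-space pose

@dataclass
class KinematicFrame:
    timestamp: float
    intended_poses: Dict[str, Pose]  # device id -> pose for configuring the avatar

class HapticInterfaceModule:
    def relative_locations(self, sensor_locations: Dict[str, Vec3],
                           reference: Vec3) -> Dict[str, Vec3]:
        """Express each sensor's position relative to a reference location
        defined with respect to the wearable device itself."""
        return {sid: _sub(loc, reference) for sid, loc in sensor_locations.items()}

    def motion_capture_frame(self, timestamp: float,
                             tracked: Dict[str, Pose],
                             relative: Dict[str, Dict[str, Vec3]]) -> MotionCaptureFrame:
        """Fuse tracked device poses with device-relative sensor data into one
        frame of per-device locations and orientations. Offsetting the device
        pose by the mean sensor offset is a placeholder for real sensor fusion."""
        poses = {}
        for device_id, device_pose in tracked.items():
            offsets = list(relative.get(device_id, {}).values()) or [(0.0, 0.0, 0.0)]
            mean = tuple(sum(c) / len(offsets) for c in zip(*offsets))
            poses[device_id] = Pose(_add(device_pose.location, mean),
                                    device_pose.orientation)
        return MotionCaptureFrame(timestamp, poses)

    def kinematic_frame(self, frame: MotionCaptureFrame,
                        mediations: List[Callable]) -> KinematicFrame:
        """Apply mediation processes that collectively convert measured poses
        into the intended poses used to configure the avatar."""
        poses = dict(frame.poses)
        for mediate in mediations:
            poses = mediate(poses)
        return KinematicFrame(frame.timestamp, poses)

# Minimal usage: one glove-like device with a single sensor and an
# identity mediation process.
module = HapticInterfaceModule()
rel = module.relative_locations({"s1": (1.0, 2.0, 3.0)}, reference=(1.0, 1.0, 1.0))
mc = module.motion_capture_frame(
    0.0,
    {"glove": Pose((0.0, 0.0, 0.0), (0.0, 0.0, 0.0))},
    {"glove": rel},
)
kin = module.kinematic_frame(mc, [lambda poses: poses])
```

The kinematic frame produced at the end is what the module would hand to the application as user input; in practice the mediation list would hold non-trivial processes (calibration, retargeting, constraint solving) rather than the identity function shown here.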