OpenPose



OpenPose is the first real-time multi-person system to jointly detect human body, hand, facial, and foot keypoints (135 keypoints in total) on single images.

It is authored by Ginés Hidalgo, Zhe Cao, Tomas Simon, Shih-En Wei, Yaadhav Raaj, Hanbyul Joo, and Yaser Sheikh. It is maintained by Ginés Hidalgo and Yaadhav Raaj. OpenPose would not be possible without the CMU Panoptic Studio dataset. We would also like to thank all the people who have helped OpenPose in any way.

Authors Ginés Hidalgo (left) and Hanbyul Joo (right) in front of the CMU Panoptic Studio

Contents

  1. Results
  2. Features
  3. Related Work
  4. Installation
  5. Quick Start Overview
  6. Send Us Feedback!
  7. Citation
  8. License

Results

Whole-body (Body, Foot, Face, and Hands) 2D Pose Estimation

Testing OpenPose: (Left) Crazy Uptown Funk flashmob in Sydney video sequence. (Center and right) Authors Ginés Hidalgo and Tomas Simon testing face and hands

Whole-body 3D Pose Reconstruction and Estimation

Tianyi Zhao testing the OpenPose 3D Module

Unity Plugin

Tianyi Zhao and Ginés Hidalgo testing the OpenPose Unity Plugin

Runtime Analysis

We show an inference-time comparison between the three available pose estimation libraries (same hardware and conditions): OpenPose, Alpha-Pose (fast PyTorch version), and Mask R-CNN. The OpenPose runtime is constant, while the runtime of Alpha-Pose and Mask R-CNN grows linearly with the number of people. More details here.

Features

Main Functionality:

  • 2D real-time multi-person keypoint detection:
    • 15-, 18-, or 25-keypoint body/foot keypoint estimation, including 6 foot keypoints. Runtime invariant to the number of detected people.
    • 2×21-keypoint hand keypoint estimation. Runtime depends on number of detected people. See OpenPose Training for a runtime invariant alternative.
    • 70-keypoint face keypoint estimation. Runtime depends on number of detected people. See OpenPose Training for a runtime invariant alternative.
  • 3D real-time single-person keypoint detection:
    • 3D triangulation from multiple single views.
    • Synchronization of Flir cameras handled.
    • Compatible with Flir/Point Grey cameras.
  • Calibration toolbox: Estimation of distortion, intrinsic, and extrinsic camera parameters.
  • Single-person tracking for further speedup or visual smoothing.

Input: Image, video, webcam, Flir/Point Grey, IP camera, and support to add your own custom input source (e.g., depth camera).

Output: Basic image + keypoint display/saving (PNG, JPG, AVI, …), keypoint saving (JSON, XML, YML, …), keypoints as array class, and support to add your own custom output code (e.g., some fancy UI).

OS: Ubuntu (20, 18, 16, 14), Windows (10, 8), Mac OSX, Nvidia TX2.

Hardware compatibility: CUDA (Nvidia GPU), OpenCL (AMD GPU), and non-GPU (CPU-only) versions.

Usage Alternatives:

  • Command-line demo for built-in functionality.
  • C++ API and Python API for custom functionality, e.g., adding your own custom input, pre-processing, post-processing, and output steps.

For further details, check the major released features and release notes docs.

Related Work

  • OpenPose training code
  • OpenPose foot dataset
  • OpenPose Unity Plugin
  • OpenPose papers published in IEEE TPAMI and CVPR. Cite them in your publications if OpenPose helps your research! (Links and more details in the Citation section below).

Installation

If you want to use OpenPose without installing or writing any code, simply download and use the latest Windows portable version of OpenPose!

Otherwise, you could build OpenPose from source. See the installation doc for all the alternatives.

Quick Start Overview

Simply use the OpenPose Demo from your favorite command-line tool (e.g., Windows PowerShell or Ubuntu Terminal). E.g., this example runs OpenPose on your webcam and displays the body keypoints:

# Ubuntu
./build/examples/openpose/openpose.bin
:: Windows - Portable Demo
bin\OpenPoseDemo.exe

You can also add any of the available flags in any order. E.g., the following example runs on a video (--video {PATH}), enables face (--face) and hands (--hand), and saves the output keypoints to JSON files on disk (--write_json {PATH}).

# Ubuntu
./build/examples/openpose/openpose.bin --video examples/media/video.avi --face --hand --write_json output_json_folder/
:: Windows - Portable Demo
bin\OpenPoseDemo.exe --video examples\media\video.avi --face --hand --write_json output_json_folder/
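
Each JSON file written by --write_json covers one frame and contains a "people" array, where each person's keypoints are stored as a flat list of (x, y, confidence) triples (e.g., pose_keypoints_2d for the body). The following is a minimal Python sketch of how such files can be read back; the folder name matches the example above, the field names reflect the documented JSON output format, and you should verify both against your OpenPose version:

# Minimal sketch: read back the JSON files produced by --write_json.
# Assumes the documented output layout: one file per frame, a "people" list,
# and flat [x0, y0, c0, x1, y1, c1, ...] keypoint lists per person.
import json
from pathlib import Path

def load_pose_keypoints(json_folder):
    """Yield (file_name, people); each person is a list of (x, y, confidence) tuples."""
    for json_file in sorted(Path(json_folder).glob("*.json")):
        with open(json_file) as f:
            frame = json.load(f)
        people = []
        for person in frame.get("people", []):
            flat = person.get("pose_keypoints_2d", [])
            # Regroup the flat list into (x, y, confidence) triples.
            people.append([tuple(flat[i:i + 3]) for i in range(0, len(flat), 3)])
        yield json_file.name, people

for name, people in load_pose_keypoints("output_json_folder"):
    print(f"{name}: {len(people)} person(s) detected")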

Optionally, you can also extend OpenPose's functionality from its Python and C++ APIs. After installing OpenPose, check its official doc for a quick overview of all the alternatives and tutorials.
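
For the Python route, a minimal sketch of the body-from-image flow is shown below, loosely following the tutorial_api_python examples shipped with OpenPose. The model and image paths are placeholders, and the exact emplaceAndPop signature differs between OpenPose versions, so treat this as an outline rather than a drop-in script:

# Minimal sketch of the OpenPose Python API (assumes a build with the Python
# module enabled and pyopenpose importable from your Python path).
import cv2
import pyopenpose as op

params = {"model_folder": "models/"}  # placeholder: path to the OpenPose models folder

opWrapper = op.WrapperPython()
opWrapper.configure(params)
opWrapper.start()

datum = op.Datum()
datum.cvInputData = cv2.imread("path/to/image.jpg")  # placeholder test image
# Recent releases expect op.VectorDatum([...]); older ones accept a plain list.
opWrapper.emplaceAndPop(op.VectorDatum([datum]))

# datum.poseKeypoints is an array of shape (num_people, num_body_parts, 3): x, y, confidence.
print(datum.poseKeypoints)
cv2.imshow("OpenPose output", datum.cvOutputData)
cv2.waitKey(0)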

Send Us Feedback!

Our library is open source for research purposes, and we want to improve it! So let us know (create a new GitHub issue or pull request, email us, etc.) if you…

  1. Find/fix any bug (in functionality or speed) or know how to speed up or improve any part of OpenPose.
  2. Want to add/show some cool functionality/demo/project made on top of OpenPose. We can add your project link to our Community-based Projects section or even integrate it with OpenPose!

Citation

Please cite these papers in your publications if OpenPose helps your research. All of OpenPose is based on OpenPose: Realtime Multi-Person 2D Pose Estimation using Part Affinity Fields, while the hand and face detectors also use Hand Keypoint Detection in Single Images using Multiview Bootstrapping (the face detector was trained using the same procedure as the hand detector).

@article{8765346,
  author = {Z. {Cao} and G. {Hidalgo Martinez} and T. {Simon} and S. {Wei} and Y. A. {Sheikh}},
  journal = {IEEE Transactions on Pattern Analysis and Machine Intelligence},
  title = {OpenPose: Realtime Multi-Person 2D Pose Estimation using Part Affinity Fields},
  year = {2019}
}

@inproceedings{simon2017hand,
  author = {Tomas Simon and Hanbyul Joo and Iain Matthews and Yaser Sheikh},
  booktitle = {CVPR},
  title = {Hand Keypoint Detection in Single Images using Multiview Bootstrapping},
  year = {2017}
}

@inproceedings{cao2017realtime,
  author = {Zhe Cao and Tomas Simon and Shih-En Wei and Yaser Sheikh},
  booktitle = {CVPR},
  title = {Realtime Multi-Person 2D Pose Estimation using Part Affinity Fields},
  year = {2017}
}

@inproceedings{wei2016cpm,
  author = {Shih-En Wei and Varun Ramakrishna and Takeo Kanade and Yaser Sheikh},
  booktitle = {CVPR},
  title = {Convolutional pose machines},
  year = {2016}
}

Paper links:

  • OpenPose: Realtime Multi-Person 2D Pose Estimation using Part Affinity Fields:
    • IEEE TPAMI
    • ArXiv
  • Hand Keypoint Detection in Single Images using Multiview Bootstrapping
  • Realtime Multi-Person 2D Pose Estimation using Part Affinity Fields
  • Convolutional Pose Machines

License

OpenPose is freely available for non-commercial use, and may be redistributed under these conditions. Please see the license for further details. Interested in a commercial license? Check this FlintBox link. For commercial queries, use the Contact section from the FlintBox link and also send a copy of that message to Yaser Sheikh.

Download the Source Code

Clone the repository from the command line:

git clone https://github.com/CMU-Perceptual-Computing-Lab/openpose.git
