Viewing the Classes and Functions in the PyRealSense2 Package

I wanted to declare types in my own code (so that the return types of my functions are visible when calling them), so I went to look at the source of the pyrealsense2 package, only to find that it contains just a single __init__.py:

pyrealsense2/__init__.py
# py libs (pyd/so) should be copied to pyrealsense2 folder
from .pyrealsense2 import *

Next to __init__.py sits a pyrealsense2.cp310-win_amd64.pyd file, which made me realize that this library is written in C++ (the Python package is merely a wrapper around it), so its classes and functions cannot be inspected directly from the source code.
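
As a quick sanity check, here is a minimal sketch assuming pyrealsense2 is installed (the paths in the comments are only illustrative); the __file__ attributes show that the package is a thin wrapper and the implementation lives in the compiled extension:

import pyrealsense2 as rs

# The package itself is only the small __init__.py wrapper ...
print(rs.__file__)               # .../site-packages/pyrealsense2/__init__.py
# ... while the implementation is the compiled extension (.pyd on Windows, .so on Linux)
print(rs.pyrealsense2.__file__)  # .../site-packages/pyrealsense2/pyrealsense2.cp310-win_amd64.pyd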

Inspecting with the help Function

So I searched for how to inspect a package in this situation and found the following article:

python解析.pyd文件_查看pyd文件里的类信息-CSDN博客

It mentions that Python's built-in help function can be used:

help(pyrealsense2)

or

import pyrealsense2 as rs
help(rs)

This produces the following output:

Help on package pyrealsense2:

NAME
    pyrealsense2 - # py libs (pyd/so) should be copied to pyrealsense2 folder

PACKAGE CONTENTS
    _version
    pyrealsense2
    pyrsutils

FUNCTIONS
...

So I continued by inspecting the pyrealsense2 module inside the package:

help(rs.pyrealsense2)

only to find that the output is enormous:

Help on module pyrealsense2.pyrealsense2 in pyrealsense2:

NAME
    pyrealsense2.pyrealsense2

DESCRIPTION
    LibrealsenseTM Python Bindings
    ==============================
-- More  --

I kept scrolling for ages without ever reaching the end 😅

A search revealed that in this pager, pressing Enter scrolls down only one line, while Space scrolls down a full page:

安装anaconda时如何跳到more的底部 - 知乎

Saving the help Output Directly to a txt File

Next I searched for a way to write the output of help directly to a file:

如何导出Python内置help()函数的输出-腾讯云开发者社区-腾讯云

The accepted answer in that post suggests a function along these lines:

import sys
import pydoc

def output_help_to_file(filepath, request):
    # Redirect stdout to the target file so that pydoc.help() writes into it
    f = open(filepath, 'w')
    sys.stdout = f
    pydoc.help(request)
    f.close()
    # Restore the original stdout
    sys.stdout = sys.__stdout__
    return

The function can then be called like this:

output_help_to_file(r'rs.txt', 'pyrealsense2')
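
As a side note (my own sketch, not from that post): contextlib.redirect_stdout achieves the same thing and restores sys.stdout automatically even if help() raises; the file name rs.txt and the utf-8 encoding here are arbitrary example choices.

import contextlib

with open('rs.txt', 'w', encoding='utf-8') as f, contextlib.redirect_stdout(f):
    # Because stdout is no longer a terminal, pydoc skips the interactive pager
    # and writes plain text straight into the file
    help('pyrealsense2')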

Running it produces the full description of the classes and functions in the pyrealsense2 package:

Help on module pyrealsense2.pyrealsense2 in pyrealsense2:

NAME
    pyrealsense2.pyrealsense2

DESCRIPTION
    LibrealsenseTM Python Bindings
    ==============================
    Library for accessing Intel RealSenseTM cameras

CLASSES
    pybind11_builtins.pybind11_object(builtins.object)
        BufData
        STAEControl
        STAFactor
        STCensusRadius
        STColorControl
        STColorCorrection
        STDepthControlGroup
        STDepthTableControl
        STHdad
        STRauColorThresholdsControl
        STRauSupportVectorControl
        STRsm
        STSloColorThresholdsControl
        STSloPenaltyControl
        calib_target_type
        calibration_status
        calibration_type
        camera_info
        config
        context
        debug_protocol
        device
            auto_calibrated_device
            calibration_change_device
            device_calibration
            firmware_logger
            playback
            recorder
            software_device
            updatable
            update_device
        device_list
        distortion
        event_information
        extension
        extrinsics
        filter_interface
        firmware_log_message
        firmware_log_parsed_message
        format
        frame
            composite_frame
            motion_frame
            points
            pose_frame
            video_frame
                depth_frame
                    disparity_frame
        frame_metadata_value
        frame_queue
        frame_source
        intrinsics
        l500_visual_preset
        log_message
        log_severity
        matchers
        motion_device_intrinsic
        motion_stream
        notification
        notification_category
        option
        option_range
        option_type
        option_value
        options
            processing_block
                filter(processing_block, filter_interface)
                    align
                    colorizer
                    decimation_filter
                    disparity_transform
                    hdr_merge
                    hole_filling_filter
                    pointcloud
                    save_single_frameset
                    save_to_ply
                    sequence_id_filter
                    spatial_filter
                    temporal_filter
                    threshold_filter
                    units_transform
                    yuy_decoder
            sensor
                color_sensor
                debug_stream_sensor
                depth_sensor
                    depth_stereo_sensor
                fisheye_sensor
                max_usable_range_sensor
                motion_sensor
                pose_sensor
                roi_sensor
                software_sensor
                wheel_odometer
        options_list
        pipeline
        pipeline_profile
        pipeline_wrapper
        playback_status
        pose
        pose_stream
        product_line
        quaternion
        region_of_interest
        rs400_advanced_mode
        rs400_visual_preset
        serializable_device
        software_motion_frame
        software_notification
        software_pose_frame
        software_video_frame
        stream
        stream_profile
            motion_stream_profile
            pose_stream_profile
            video_stream_profile
        syncer
        terminal_parser
        texture_coordinate
        timestamp_domain
        vector
        vertex
        video_stream

    class composite_frame(frame)
        |  Extends the frame class with additional frameset related attributes and functions
        |  
        |  Method resolution order:
        |      composite_frame
        |      frame
        |      pybind11_builtins.pybind11_object
        |      builtins.object
        |  
        |  Methods defined here:
        |
        |  ...
        |  
        |  get_color_frame(...)
        |      get_color_frame(self: pyrealsense2.pyrealsense2.composite_frame) -> pyrealsense2.pyrealsense2.video_frame
        |      
        |      Retrieve the first color frame, if no frame is found, search for the color frame from IR stream. If one still can't be found, return an empty frame instance.
        |  
        |  get_depth_frame(...)
        |      get_depth_frame(self: pyrealsense2.pyrealsense2.composite_frame) -> pyrealsense2.pyrealsense2.depth_frame
        |      
        |      Retrieve the first depth frame, if no frame is found, return an empty frame instance.
        |  
        |  ----------------------------------------------------------------------
        |  Methods inherited from frame:
        |  
        |  ...
        |  
        |  ----------------------------------------------------------------------
        |  Readonly properties inherited from frame:
        |  
        |  data
        |      Data from the frame handle. Identical to calling get_data.
        |  
        |  frame_number
        |      The frame number. Identical to calling get_frame_number.
        |  
        |  frame_timestamp_domain
        |      The timestamp domain. Identical to calling get_frame_timestamp_domain.
        |  
        |  profile
        |      Stream profile from frame handle. Identical to calling get_profile.
        |  
        |  timestamp
        |      Time at which the frame was captured. Identical to calling get_timestamp.
        |  
        |  ----------------------------------------------------------------------
        |  Static methods inherited from pybind11_builtins.pybind11_object:
        |  
        |  __new__(*args, **kwargs) from pybind11_builtins.pybind11_type
        |      Create and return a new object.  See help(type) for accurate signature.

    class config(pybind11_builtins.pybind11_object)
        |  The config allows pipeline users to request filters for the pipeline streams and device selection and configuration.
        |  This is an optional step in pipeline creation, as the pipeline resolves its streaming device internally.
        |  Config provides its users a way to set the filters and test if there is no conflict with the pipeline requirements from the device.
        |  It also allows the user to find a matching device for the config filters and the pipeline, in order to select a device explicitly, and modify its controls before streaming starts.
        |  
        |  Method resolution order:
        |      config
        |      pybind11_builtins.pybind11_object
        |      builtins.object
        |  
        |  Methods defined here:
        |  
        |  __init__(...)
        |      __init__(self: pyrealsense2.pyrealsense2.config) -> None
        |  
        |  can_resolve(...)
        |      can_resolve(self: pyrealsense2.pyrealsense2.config, p: pyrealsense2.pyrealsense2.pipeline_wrapper) -> bool
        |      
        |      Check if the config can resolve the configuration filters, to find a matching device and streams profiles. The resolution conditions are as described in resolve().
        |  
        |  disable_all_streams(...)
        |      disable_all_streams(self: pyrealsense2.pyrealsense2.config) -> None
        |      
        |      Disable all device stream explicitly, to remove any requests on the streams profiles.
        |      The streams can still be enabled due to pipeline computer vision module request. This call removes any filter on the streams configuration.
        |  
        |  disable_stream(...)
        |      disable_stream(self: pyrealsense2.pyrealsense2.config, stream: pyrealsense2.pyrealsense2.stream, index: int = -1) -> None
        |      
        |      Disable a device stream explicitly, to remove any requests on this stream profile.
        |      The stream can still be enabled due to pipeline computer vision module request. This call removes any filter on the stream configuration.
        |  
        |  enable_all_streams(...)
        |      enable_all_streams(self: pyrealsense2.pyrealsense2.config) -> None
        |      
        |      Enable all device streams explicitly.
        |      The conditions and behavior of this method are similar to those of enable_stream().
        |      This filter enables all raw streams of the selected device. The device is either selected explicitly by the application, or by the pipeline requirements or default. The list of streams is device dependent.
        |  
        |  enable_device(...)
        |      enable_device(self: pyrealsense2.pyrealsense2.config, serial: str) -> None
        |      
        |      Select a specific device explicitly by its serial number, to be used by the pipeline.
        |      The conditions and behavior of this method are similar to those of enable_stream().
        |      This method is required if the application needs to set device or sensor settings prior to pipeline streaming, to enforce the pipeline to use the configured device.
        |  
        |  enable_device_from_file(...)
        |      enable_device_from_file(self: pyrealsense2.pyrealsense2.config, file_name: str, repeat_playback: bool = True) -> None
        |      
        |      Select a recorded device from a file, to be used by the pipeline through playback.
        |      The device available streams are as recorded to the file, and resolve() considers only this device and configuration as available.
        |      This request cannot be used if enable_record_to_file() is called for the current config, and vice versa.
        |  
        |  enable_record_to_file(...)
        |      enable_record_to_file(self: pyrealsense2.pyrealsense2.config, file_name: str) -> None
        |      
        |      Requires that the resolved device would be recorded to file.
        |      This request cannot be used if enable_device_from_file() is called for the current config, and vice versa as available.
        |  
        |  enable_stream(...)
        |      enable_stream(*args, **kwargs)
        |      Overloaded function.
        |      
        |      1. enable_stream(self: pyrealsense2.pyrealsense2.config, stream_type: pyrealsense2.pyrealsense2.stream, stream_index: int, width: int, height: int, format: pyrealsense2.pyrealsense2.format, framerate: int) -> None
        |      
        |      Enable a device stream explicitly, with selected stream parameters.
        |      The method allows the application to request a stream with specific configuration.
        |      If no stream is explicitly enabled, the pipeline configures the device and its streams according to the attached computer vision modules and processing blocks requirements, or default configuration for the first available device.
        |      The application can configure any of the input stream parameters according to its requirement, or set to 0 for don't care value.
        |      The config accumulates the application calls for enable configuration methods, until the configuration is applied.
        |      Multiple enable stream calls for the same stream override each other, and the last call is maintained.
        |      Upon calling resolve(), the config checks for conflicts between the application configuration requests and the attached computer vision modules and processing blocks requirements, and fails if conflicts are found.
        |      Before resolve() is called, no conflict check is done.
        |      
        |      2. enable_stream(self: pyrealsense2.pyrealsense2.config, stream_type: pyrealsense2.pyrealsense2.stream) -> None
        |      
        |      Stream type only. Other parameters are resolved internally.
        |      
        |      3. enable_stream(self: pyrealsense2.pyrealsense2.config, stream_type: pyrealsense2.pyrealsense2.stream, stream_index: int) -> None
        |      
        |      Stream type and possibly also stream index. Other parameters are resolved internally.
        |      
        |      4. enable_stream(self: pyrealsense2.pyrealsense2.config, stream_type: pyrealsense2.pyrealsense2.stream, format: pyrealsense2.pyrealsense2.format, framerate: int) -> None
        |      
        |      Stream type and format, and possibly frame rate. Other parameters are resolved internally.
        |      
        |      5. enable_stream(self: pyrealsense2.pyrealsense2.config, stream_type: pyrealsense2.pyrealsense2.stream, width: int, height: int, format: pyrealsense2.pyrealsense2.format, framerate: int) -> None
        |      
        |      Stream type and resolution, and possibly format and frame rate. Other parameters are resolved internally.
        |      
        |      6. enable_stream(self: pyrealsense2.pyrealsense2.config, stream_type: pyrealsense2.pyrealsense2.stream, stream_index: int, format: pyrealsense2.pyrealsense2.format, framerate: int) -> None
        |      
        |      Stream type, index, and format, and possibly framerate. Other parameters are resolved internally.
        |  
        |  resolve(...)
        |      resolve(self: pyrealsense2.pyrealsense2.config, p: pyrealsense2.pyrealsense2.pipeline_wrapper) -> pyrealsense2.pyrealsense2.pipeline_profile
        |      
        |      Resolve the configuration filters, to find a matching device and streams profiles.
        |      The method resolves the user configuration filters for the device and streams, and combines them with the requirements of the computer vision modules and processing blocks attached to the pipeline. If there are no conflicts of requests, it looks for an available device, which can satisfy all requests, and selects the first matching streams configuration.
        |      In the absence of any request, the config object selects the first available device and the first color and depth streams configuration.The pipeline profile selection during start() follows the same method. Thus, the selected profile is the same, if no change occurs to the available devices.Resolving the pipeline configuration provides the application access to the pipeline selected device for advanced control.The returned configuration is not applied to the device, so the application doesn't own the device sensors. However, the application can call enable_device(), to enforce the device returned by this method is selected by pipeline start(), and configure the device and sensors options or extensions before streaming starts.
        |  
        |  ----------------------------------------------------------------------
        |  Static methods inherited from pybind11_builtins.pybind11_object:
        |  
        |  __new__(*args, **kwargs) from pybind11_builtins.pybind11_type
        |      Create and return a new object.  See help(type) for accurate signature.

    ...

    class depth_frame(video_frame)
        |  Extends the video_frame class with additional depth related attributes and functions.
        |  
        |  Method resolution order:
        |      depth_frame
        |      video_frame
        |      frame
        |      pybind11_builtins.pybind11_object
        |      builtins.object
        |  
        |  Methods defined here:
        |  
        |  __init__(...)
        |      __init__(self: pyrealsense2.pyrealsense2.depth_frame, arg0: pyrealsense2.pyrealsense2.frame) -> None
        |  
        |  get_distance(...)
        |      get_distance(self: pyrealsense2.pyrealsense2.depth_frame, x: int, y: int) -> float
        |      
        |      Provide the depth in meters at the given pixel
        |  
        |  get_units(...)
        |      get_units(self: pyrealsense2.pyrealsense2.depth_frame) -> float
        |      
        |      Provide the scaling factor to use when converting from get_data() units to meters
        |  
        |  ----------------------------------------------------------------------
        |  Methods inherited from video_frame:
        |  
        |  ...
        |  
        |  ----------------------------------------------------------------------
        |  Readonly properties inherited from video_frame:
        |  
        |  bits_per_pixel
        |      Bits per pixel. Identical to calling get_bits_per_pixel.
        |  
        |  bytes_per_pixel
        |      Bytes per pixel. Identical to calling get_bytes_per_pixel.
        |  
        |  height
        |      Image height in pixels. Identical to calling get_height.
        |  
        |  stride_in_bytes
        |      Frame stride, meaning the actual line width in memory in bytes (not the logical image width). Identical to calling get_stride_in_bytes.
        |  
        |  width
        |      Image width in pixels. Identical to calling get_width.
        |  
        |  ----------------------------------------------------------------------
        |  Methods inherited from frame:
        |  
        |  ...
        |  
        |  get_data(...)
        |      get_data(self: pyrealsense2.pyrealsense2.frame) -> pyrealsense2.pyrealsense2.BufData
        |      
        |      Retrieve data from the frame handle.
        |  
        |  ...
        |  
        |  ----------------------------------------------------------------------
        |  Readonly properties inherited from frame:
        |  
        |  data
        |      Data from the frame handle. Identical to calling get_data.
        |  
        |  frame_number
        |      The frame number. Identical to calling get_frame_number.
        |  
        |  frame_timestamp_domain
        |      The timestamp domain. Identical to calling get_frame_timestamp_domain.
        |  
        |  profile
        |      Stream profile from frame handle. Identical to calling get_profile.
        |  
        |  timestamp
        |      Time at which the frame was captured. Identical to calling get_timestamp.
        |  
        |  ----------------------------------------------------------------------
        |  Static methods inherited from pybind11_builtins.pybind11_object:
        |  
        |  __new__(*args, **kwargs) from pybind11_builtins.pybind11_type
        |      Create and return a new object.  See help(type) for accurate signature.

    ...

    class format(pybind11_builtins.pybind11_object)
        |  Method resolution order:
        |      format
        |      pybind11_builtins.pybind11_object
        |      builtins.object
        |  
        |  Methods defined here:
        |  
        |  ...
        |  
        |  ----------------------------------------------------------------------
        |  Readonly properties defined here:
        |  
        |  __members__
        |  
        |  name
        |      name(self: handle) -> str
        |  
        |  value
        |  
        |  ----------------------------------------------------------------------
        |  Data and other attributes defined here:
        |  
        |  any = <format.any: 0>
        |  
        |  bgr8 = <format.bgr8: 6>
        |  
        |  bgra8 = <format.bgra8: 8>
        |  
        |  combined_motion = <format.combined_motion: 33>
        |  
        |  disparity16 = <format.disparity16: 2>
        |  
        |  disparity32 = <format.disparity32: 19>
        |  
        |  distance = <format.distance: 21>
        |  
        |  fg = <format.fg: 29>
        |  
        |  gpio_raw = <format.gpio_raw: 17>
        |  
        |  invi = <format.invi: 26>
        |  
        |  inzi = <format.inzi: 25>
        |  
        |  m420 = <format.m420: 32>
        |  
        |  mjpeg = <format.mjpeg: 22>
        |  
        |  motion_raw = <format.motion_raw: 15>
        |  
        |  motion_xyz32f = <format.motion_xyz32f: 16>
        |  
        |  raw10 = <format.raw10: 11>
        |  
        |  raw16 = <format.raw16: 12>
        |  
        |  raw8 = <format.raw8: 13>
        |  
        |  rgb8 = <format.rgb8: 5>
        |  
        |  rgba8 = <format.rgba8: 7>
        |  
        |  six_dof = <format.six_dof: 18>
        |  
        |  uyvy = <format.uyvy: 14>
        |  
        |  w10 = <format.w10: 27>
        |  
        |  xyz32f = <format.xyz32f: 3>
        |  
        |  y10bpack = <format.y10bpack: 20>
        |  
        |  y12i = <format.y12i: 24>
        |  
        |  y16 = <format.y16: 10>
        |  
        |  y16i = <format.y16i: 31>
        |  
        |  y411 = <format.y411: 30>
        |  
        |  y8 = <format.y8: 9>
        |  
        |  y8i = <format.y8i: 23>
        |  
        |  yuyv = <format.yuyv: 4>
        |  
        |  z16 = <format.z16: 1>
        |  
        |  z16h = <format.z16h: 28>
        |  
        |  ----------------------------------------------------------------------
        |  Static methods inherited from pybind11_builtins.pybind11_object:
        |  
        |  __new__(*args, **kwargs) from pybind11_builtins.pybind11_type
        |      Create and return a new object.  See help(type) for accurate signature.

    ...

    class pipeline(pybind11_builtins.pybind11_object)
        |  The pipeline simplifies the user interaction with the device and computer vision processing modules.
        |  The class abstracts the camera configuration and streaming, and the vision modules triggering and threading.
        |  It lets the application focus on the computer vision output of the modules, or the device output data.
        |  The pipeline can manage computer vision modules, which are implemented as a processing blocks.
        |  The pipeline is the consumer of the processing block interface, while the application consumes the computer vision interface.
        |  
        |  Method resolution order:
        |      pipeline
        |      pybind11_builtins.pybind11_object
        |      builtins.object
        |  
        |  Methods defined here:
        |  
        |  __init__(...)
        |      __init__(*args, **kwargs)
        |      Overloaded function.
        |      
        |      1. __init__(self: pyrealsense2.pyrealsense2.pipeline) -> None
        |      
        |      With a default context
        |      
        |      2. __init__(self: pyrealsense2.pyrealsense2.pipeline, ctx: pyrealsense2.pyrealsense2.context) -> None
        |      
        |      The caller can provide a context created by the application, usually for playback or testing purposes
        |  
        |  get_active_profile(...)
        |      get_active_profile(self: pyrealsense2.pyrealsense2.pipeline) -> pyrealsense2.pyrealsense2.pipeline_profile
        |  
        |  poll_for_frames(...)
        |      poll_for_frames(self: pyrealsense2.pyrealsense2.pipeline) -> pyrealsense2.pyrealsense2.composite_frame
        |      
        |      Check if a new set of frames is available and retrieve the latest undelivered set.
        |      The frames set includes time-synchronized frames of each enabled stream in the pipeline.
        |      The method returns without blocking the calling thread, with status of new frames available or not.
        |      If available, it fetches the latest frames set.
        |      Device frames, which were produced while the function wasn't called, are dropped.
        |      To avoid frame drops, this method should be called as fast as the device frame rate.
        |      The application can maintain the frames handles to defer processing. However, if the application maintains too long history, the device may lack memory resources to produce new frames, and the following calls to this method shall return no new frames, until resources become available.
        |  
        |  start(...)
        |      start(*args, **kwargs)
        |      Overloaded function.
        |      
        |      1. start(self: pyrealsense2.pyrealsense2.pipeline) -> pyrealsense2.pyrealsense2.pipeline_profile
        |      
        |      Start the pipeline streaming with its default configuration.
        |      The pipeline streaming loop captures samples from the device, and delivers them to the attached computer vision modules and processing blocks, according to each module requirements and threading model.
        |      During the loop execution, the application can access the camera streams by calling wait_for_frames() or poll_for_frames().
        |      The streaming loop runs until the pipeline is stopped.
        |      Starting the pipeline is possible only when it is not started. If the pipeline was started, an exception is raised.
        |      
        |      
        |      2. start(self: pyrealsense2.pyrealsense2.pipeline, config: pyrealsense2.pyrealsense2.config) -> pyrealsense2.pyrealsense2.pipeline_profile
        |      
        |      Start the pipeline streaming according to the configuraion.
        |      The pipeline streaming loop captures samples from the device, and delivers them to the attached computer vision modules and processing blocks, according to each module requirements and threading model.
        |      During the loop execution, the application can access the camera streams by calling wait_for_frames() or poll_for_frames().
        |      The streaming loop runs until the pipeline is stopped.
        |      Starting the pipeline is possible only when it is not started. If the pipeline was started, an exception is raised.
        |      The pipeline selects and activates the device upon start, according to configuration or a default configuration.
        |      When the rs2::config is provided to the method, the pipeline tries to activate the config resolve() result.
        |      If the application requests are conflicting with pipeline computer vision modules or no matching device is available on the platform, the method fails.
        |      Available configurations and devices may change between config resolve() call and pipeline start, in case devices are connected or disconnected, or another application acquires ownership of a device.
        |      
        |      3. start(self: pyrealsense2.pyrealsense2.pipeline, callback: Callable[[pyrealsense2.pyrealsense2.frame], None]) -> pyrealsense2.pyrealsense2.pipeline_profile
        |      
        |      Start the pipeline streaming with its default configuration.
        |      The pipeline captures samples from the device, and delivers them to the provided frame callback.
        |      Starting the pipeline is possible only when it is not started. If the pipeline was started, an exception is raised.
        |      When starting the pipeline with a callback both wait_for_frames() and poll_for_frames() will throw exception.
        |      
        |      4. start(self: pyrealsense2.pyrealsense2.pipeline, config: pyrealsense2.pyrealsense2.config, callback: Callable[[pyrealsense2.pyrealsense2.frame], None]) -> pyrealsense2.pyrealsense2.pipeline_profile
        |      
        |      Start the pipeline streaming according to the configuraion.
        |      The pipeline captures samples from the device, and delivers them to the provided frame callback.
        |      Starting the pipeline is possible only when it is not started. If the pipeline was started, an exception is raised.
        |      When starting the pipeline with a callback both wait_for_frames() and poll_for_frames() will throw exception.
        |      The pipeline selects and activates the device upon start, according to configuration or a default configuration.
        |      When the rs2::config is provided to the method, the pipeline tries to activate the config resolve() result.
        |      If the application requests are conflicting with pipeline computer vision modules or no matching device is available on the platform, the method fails.
        |      Available configurations and devices may change between config resolve() call and pipeline start, in case devices are connected or disconnected, or another application acquires ownership of a device.
        |      
        |      5. start(self: pyrealsense2.pyrealsense2.pipeline, queue: pyrealsense2.pyrealsense2.frame_queue) -> pyrealsense2.pyrealsense2.pipeline_profile
        |      
        |      Start the pipeline streaming with its default configuration.
        |      The pipeline captures samples from the device, and delivers them to the provided frame queue.
        |      Starting the pipeline is possible only when it is not started. If the pipeline was started, an exception is raised.
        |      When starting the pipeline with a callback both wait_for_frames() and poll_for_frames() will throw exception.
        |      
        |      6. start(self: pyrealsense2.pyrealsense2.pipeline, config: pyrealsense2.pyrealsense2.config, queue: pyrealsense2.pyrealsense2.frame_queue) -> pyrealsense2.pyrealsense2.pipeline_profile
        |      
        |      Start the pipeline streaming according to the configuraion.
        |      The pipeline captures samples from the device, and delivers them to the provided frame queue.
        |      Starting the pipeline is possible only when it is not started. If the pipeline was started, an exception is raised.
        |      When starting the pipeline with a callback both wait_for_frames() and poll_for_frames() will throw exception.
        |      The pipeline selects and activates the device upon start, according to configuration or a default configuration.
        |      When the rs2::config is provided to the method, the pipeline tries to activate the config resolve() result.
        |      If the application requests are conflicting with pipeline computer vision modules or no matching device is available on the platform, the method fails.
        |      Available configurations and devices may change between config resolve() call and pipeline start, in case devices are connected or disconnected, or another application acquires ownership of a device.
        |  
        |  stop(...)
        |      stop(self: pyrealsense2.pyrealsense2.pipeline) -> None
        |      
        |      Stop the pipeline streaming.
        |      The pipeline stops delivering samples to the attached computer vision modules and processing blocks, stops the device streaming and releases the device resources used by the pipeline. It is the application's responsibility to release any frame reference it owns.
        |      The method takes effect only after start() was called, otherwise an exception is raised.
        |  
        |  try_wait_for_frames(...)
        |      try_wait_for_frames(self: pyrealsense2.pyrealsense2.pipeline, timeout_ms: int = 5000) -> Tuple[bool, pyrealsense2.pyrealsense2.composite_frame]
        |  
        |  wait_for_frames(...)
        |      wait_for_frames(self: pyrealsense2.pyrealsense2.pipeline, timeout_ms: int = 5000) -> pyrealsense2.pyrealsense2.composite_frame
        |      
        |      Wait until a new set of frames becomes available.
        |      The frames set includes time-synchronized frames of each enabled stream in the pipeline.
        |      In case of different frame rates of the streams, the frames set include a matching frame of the slow stream, which may have been included in previous frames set.
        |      The method blocks the calling thread, and fetches the latest unread frames set.
        |      Device frames, which were produced while the function wasn't called, are dropped. To avoid frame drops, this method should be called as fast as the device frame rate.
        |      The application can maintain the frames handles to defer processing. However, if the application maintains too long history, the device may lack memory resources to produce new frames, and the following call to this method shall fail to retrieve new frames, until resources become available.
        |  
        |  ----------------------------------------------------------------------
        |  Static methods inherited from pybind11_builtins.pybind11_object:
        |  
        |  __new__(*args, **kwargs) from pybind11_builtins.pybind11_type
        |      Create and return a new object.  See help(type) for accurate signature.

    ...

    class stream(pybind11_builtins.pybind11_object)
        |  Method resolution order:
        |      stream
        |      pybind11_builtins.pybind11_object
        |      builtins.object
        |  
        |  Methods defined here:
        |  
        |  ...
        |  
        |  ----------------------------------------------------------------------
        |  Readonly properties defined here:
        |  
        |  __members__
        |  
        |  name
        |      name(self: handle) -> str
        |  
        |  value
        |  
        |  ----------------------------------------------------------------------
        |  Data and other attributes defined here:
        |  
        |  accel = <stream.accel: 6>
        |  
        |  any = <stream.any: 0>
        |  
        |  color = <stream.color: 2>
        |  
        |  confidence = <stream.confidence: 9>
        |  
        |  depth = <stream.depth: 1>
        |  
        |  fisheye = <stream.fisheye: 4>
        |  
        |  gpio = <stream.gpio: 7>
        |  
        |  gyro = <stream.gyro: 5>
        |  
        |  infrared = <stream.infrared: 3>
        |  
        |  motion = <stream.motion: 10>
        |  
        |  pose = <stream.pose: 8>
        |  
        |  ----------------------------------------------------------------------
        |  Static methods inherited from pybind11_builtins.pybind11_object:
        |  
        |  __new__(*args, **kwargs) from pybind11_builtins.pybind11_type
        |      Create and return a new object.  See help(type) for accurate signature.

    ...

    class video_frame(frame)
        |  Extends the frame class with additional video related attributes and functions.
        |  
        |  Method resolution order:
        |      video_frame
        |      frame
        |      pybind11_builtins.pybind11_object
        |      builtins.object
        |  
        |  Methods defined here:
        |  
        |  __init__(...)
        |      __init__(self: pyrealsense2.pyrealsense2.video_frame, arg0: pyrealsense2.pyrealsense2.frame) -> None
        |  
        |  extract_target_dimensions(...)
        |      extract_target_dimensions(self: pyrealsense2.pyrealsense2.video_frame, arg0: pyrealsense2.pyrealsense2.calib_target_type) -> List[float]
        |      
        |      This will calculate the four target dimenson size(s) in millimeter on the specific target.
        |  
        |  get_bits_per_pixel(...)
        |      get_bits_per_pixel(self: pyrealsense2.pyrealsense2.video_frame) -> int
        |      
        |      Retrieve bits per pixel.
        |  
        |  get_bytes_per_pixel(...)
        |      get_bytes_per_pixel(self: pyrealsense2.pyrealsense2.video_frame) -> int
        |      
        |      Retrieve bytes per pixel.
        |  
        |  get_height(...)
        |      get_height(self: pyrealsense2.pyrealsense2.video_frame) -> int
        |      
        |      Returns image height in pixels.
        |  
        |  get_stride_in_bytes(...)
        |      get_stride_in_bytes(self: pyrealsense2.pyrealsense2.video_frame) -> int
        |      
        |      Retrieve frame stride, meaning the actual line width in memory in bytes (not the logical image width).
        |  
        |  get_width(...)
        |      get_width(self: pyrealsense2.pyrealsense2.video_frame) -> int
        |      
        |      Returns image width in pixels.
        |  
        |  ----------------------------------------------------------------------
        |  Readonly properties defined here:
        |  
        |  bits_per_pixel
        |      Bits per pixel. Identical to calling get_bits_per_pixel.
        |  
        |  bytes_per_pixel
        |      Bytes per pixel. Identical to calling get_bytes_per_pixel.
        |  
        |  height
        |      Image height in pixels. Identical to calling get_height.
        |  
        |  stride_in_bytes
        |      Frame stride, meaning the actual line width in memory in bytes (not the logical image width). Identical to calling get_stride_in_bytes.
        |  
        |  width
        |      Image width in pixels. Identical to calling get_width.
        |  
        |  ----------------------------------------------------------------------
        |  Methods inherited from frame:
        |  
        |  ...
        |  
        |  get_data(...)
        |      get_data(self: pyrealsense2.pyrealsense2.frame) -> pyrealsense2.pyrealsense2.BufData
        |      
        |      Retrieve data from the frame handle.
        |  
        |  ...
        |  
        |  ----------------------------------------------------------------------
        |  Readonly properties inherited from frame:
        |  
        |  data
        |      Data from the frame handle. Identical to calling get_data.
        |  
        |  frame_number
        |      The frame number. Identical to calling get_frame_number.
        |  
        |  frame_timestamp_domain
        |      The timestamp domain. Identical to calling get_frame_timestamp_domain.
        |  
        |  profile
        |      Stream profile from frame handle. Identical to calling get_profile.
        |  
        |  timestamp
        |      Time at which the frame was captured. Identical to calling get_timestamp.
        |  
        |  ----------------------------------------------------------------------
        |  Static methods inherited from pybind11_builtins.pybind11_object:
        |  
        |  __new__(*args, **kwargs) from pybind11_builtins.pybind11_type
        |      Create and return a new object.  See help(type) for accurate signature.

    ...

FUNCTIONS
    ...

DATA
    D400 = <product_line.D400: 2>
    D500 = <product_line.D500: 32>
    DEPTH = <product_line.DEPTH: 46>
    L500 = <product_line.L500: 8>
    SR300 = <product_line.SR300: 4>
    T200 = <product_line.T200: 16>
    TRACKING = <product_line.T200: 16>
    __full_version__ = '2.55.1.6486'
    any = <product_line.any: 255>
    any_intel = <product_line.any_intel: 254>
    non_intel = <product_line.non_intel: 1>
    sw_only = <product_line.sw_only: 256>

VERSION
    2.55.1

FILE
    c:\users\ronald\appdata\local\programs\python\python310\lib\site-packages\pyrealsense2\pyrealsense2.cp310-win_amd64.pyd
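
With the class names known, the original goal of annotating return types in my own code is now easy. The snippet below is only a minimal sketch pieced together from the signatures documented above; it assumes a RealSense depth camera is connected, and the 640x480 @ 30 fps z16 stream settings are merely example values:

import pyrealsense2 as rs

def grab_depth_frame(pipe: rs.pipeline) -> rs.depth_frame:
    # Block until a synchronized frameset arrives, then pull out its depth frame
    frames: rs.composite_frame = pipe.wait_for_frames()
    return frames.get_depth_frame()

pipe = rs.pipeline()
cfg = rs.config()
# Explicitly request a 640x480 depth stream at 30 fps in z16 format
cfg.enable_stream(rs.stream.depth, 640, 480, rs.format.z16, 30)
pipe.start(cfg)
try:
    depth = grab_depth_frame(pipe)
    # Depth in meters at the image center
    print(depth.get_distance(depth.get_width() // 2, depth.get_height() // 2))
finally:
    pipe.stop()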