i.MX Video Capture Driver
=========================

Introduction
------------

The Freescale i.MX5/6 contains an Image Processing Unit (IPU), which
handles the flow of image frames to and from capture devices and
display devices.

For image capture, the IPU contains the following internal subunits:

- Image DMA Controller (IDMAC)
- Camera Serial Interface (CSI)
- Image Converter (IC)
- Sensor Multi-FIFO Controller (SMFC)
- Image Rotator (IRT)
- Video De-Interlacing or Combining Block (VDIC)

The IDMAC is the DMA controller for transfer of image frames to and from
memory. Various dedicated DMA channels exist for both video capture and
display paths. During transfer, the IDMAC is also capable of vertical
image flip, 8x8 block transfer (see IRT description), pixel component
re-ordering (for example UYVY to YUYV) within the same colorspace, and
even packed <--> planar conversion. It can also perform a simple
de-interlacing by interleaving even and odd lines during transfer
(without motion compensation, which requires the VDIC).

The CSI is the backend capture unit that interfaces directly with
camera sensors over Parallel, BT.656/1120, and MIPI CSI-2 busses.

The IC handles color-space conversion, resizing (downscaling and
upscaling), horizontal flip, and 90/270 degree rotation operations.
There are three independent "tasks" within the IC that can carry out
conversions concurrently: pre-process encoding, pre-process viewfinder,
and post-processing. Within each task, conversions are split into three
sections: downsizing section, main section (upsizing, flip, colorspace
conversion, and graphics plane combining), and rotation section.

The IPU time-shares the IC task operations. The time-slice granularity
is one burst of eight pixels in the downsizing section, one image line
in the main processing section, and one image frame in the rotation
section.

The SMFC is composed of four independent FIFOs, each of which can
transfer captured frames from sensors directly to memory concurrently
via four IDMAC channels.

The IRT carries out 90 and 270 degree image rotation operations. The
rotation operation is carried out on 8x8 pixel blocks at a time. This
operation is supported by the IDMAC, which handles the 8x8 block
transfer along with block reordering, in coordination with vertical
flip.

The VDIC handles the conversion of interlaced video to progressive, with
support for different motion compensation modes (low, medium, and high
motion). The deinterlaced output frames from the VDIC can be sent to the
IC pre-process viewfinder task for further conversions. The VDIC also
contains a Combiner that combines two image planes, with alpha blending
and color keying.

In addition to the IPU internal subunits, there are two other units
outside the IPU that are also involved in video capture on i.MX:

- MIPI CSI-2 Receiver for camera sensors with the MIPI CSI-2 bus
  interface. This is a Synopsys DesignWare core.
- Two video multiplexers for selecting among multiple sensor inputs
  to send to a CSI.

For more info, refer to the latest versions of the i.MX5/6 reference
manuals [#f1]_ and [#f2]_.

Features
--------

Some of the features of this driver include:

- Many different pipelines can be configured via the media controller
  API, corresponding to the hardware video capture pipelines supported
  by the i.MX.

- Supports parallel, BT.656, and MIPI CSI-2 interfaces.

- Concurrent independent streams, by configuring pipelines to multiple
  video capture interfaces using independent entities.

- Scaling, color-space conversion, horizontal and vertical flip, and
  image rotation via IC task subdevs.

- Many pixel formats supported (RGB, packed and planar YUV, partial
  planar YUV).

- The VDIC subdev supports motion compensated de-interlacing, with three
  motion compensation modes: low, medium, and high motion. Pipelines are
  defined that allow sending frames to the VDIC subdev directly from the
  CSI. Sending frames to the VDIC from memory buffers via an
  output/mem2mem device is planned for the future.

- Includes a Frame Interval Monitor (FIM) that can correct vertical sync
  problems with the ADV718x video decoders.

Entities
--------

imx6-mipi-csi2
--------------

This is the MIPI CSI-2 receiver entity. It has one sink pad to receive
the MIPI CSI-2 stream (usually from a MIPI CSI-2 camera sensor). It has
four source pads, corresponding to the four MIPI CSI-2 demuxed virtual
channel outputs. Multiple source pads can be enabled to independently
stream from multiple virtual channels.

This entity actually consists of two sub-blocks. One is the MIPI CSI-2
core, a Synopsys DesignWare MIPI CSI-2 core. The other sub-block is a
"CSI-2 to IPU gasket". The gasket acts as a demultiplexer of the four
virtual channel streams, providing four separate parallel buses
containing each virtual channel that are routed to CSIs or video
multiplexers as described below.

On i.MX6 solo/dual-lite, all four virtual channel buses are routed to
two video multiplexers. Both CSI0 and CSI1 can receive any virtual
channel, as selected by the video multiplexers.

On i.MX6 Quad, virtual channel 0 is routed to IPU1-CSI0 (after being
selected by a video mux), virtual channels 1 and 2 are hard-wired to
IPU1-CSI1 and IPU2-CSI0, respectively, and virtual channel 3 is routed
to IPU2-CSI1 (again selected by a video mux).

ipuX_csiY_mux
-------------

These are the video multiplexers. They have two or more sink pads to
select from either camera sensors with a parallel interface, or from
MIPI CSI-2 virtual channels from the imx6-mipi-csi2 entity. They have a
single source pad that routes to a CSI (ipuX_csiY entities).

On i.MX6 solo/dual-lite, there are two video mux entities. One sits
in front of IPU1-CSI0 to select between a parallel sensor and any of
the four MIPI CSI-2 virtual channels (a total of five sink pads). The
other mux sits in front of IPU1-CSI1, and again has five sink pads to
select between a parallel sensor and any of the four MIPI CSI-2 virtual
channels.

On i.MX6 Quad, there are two video mux entities. One sits in front of
IPU1-CSI0 to select between a parallel sensor and MIPI CSI-2 virtual
channel 0 (two sink pads). The other mux sits in front of IPU2-CSI1 to
select between a parallel sensor and MIPI CSI-2 virtual channel 3 (two
sink pads).

ipuX_csiY
---------

These are the CSI entities. They have a single sink pad receiving from
either a video mux or from a MIPI CSI-2 virtual channel as described
above.

This entity has two source pads. The first source pad can link directly
to the ipuX_vdic entity or the ipuX_ic_prp entity, using hardware links
that require no IDMAC memory buffer transfer.

When the direct source pad is routed to the ipuX_ic_prp entity, frames
from the CSI can be processed by one or both of the IC pre-processing
tasks.

When the direct source pad is routed to the ipuX_vdic entity, the VDIC
will carry out motion-compensated de-interlace using "high motion" mode
(see description of the ipuX_vdic entity).

The second source pad sends video frames directly to memory buffers
via the SMFC and an IDMAC channel, bypassing IC pre-processing. This
source pad is routed to a capture device node, with a node name of the
format "ipuX_csiY capture".

Note that since the IDMAC source pad makes use of an IDMAC channel, it
can do pixel reordering within the same colorspace. For example, the
sink pad can take UYVY2X8, but the IDMAC source pad can output YUYV2X8.
If the sink pad is receiving YUV, the output at the capture device can
also be converted to a planar YUV format such as YUV420.

It will also perform simple de-interlace without motion compensation,
which is activated if the sink pad's field type is an interlaced type,
and the IDMAC source pad field type is set to none.

This subdev can generate the following event when enabling the second
IDMAC source pad:

- V4L2_EVENT_IMX_FRAME_INTERVAL_ERROR

The user application can subscribe to this event from the ipuX_csiY
subdev node. This event is generated by the Frame Interval Monitor
(see below for more on the FIM).

Cropping in ipuX_csiY
---------------------

The CSI supports cropping the incoming raw sensor frames. This is
implemented in the ipuX_csiY entities at the sink pad, using the
crop selection subdev API.

The CSI also supports fixed divide-by-two downscaling independently in
width and height. This is implemented in the ipuX_csiY entities at
the sink pad, using the compose selection subdev API.

The output rectangle at the ipuX_csiY source pad is the same as
the compose rectangle at the sink pad. So the source pad rectangle
cannot be negotiated; it must be set using the compose selection
API at the sink pad (if /2 downscale is desired, otherwise the source
pad rectangle is equal to the incoming rectangle).

To give an example of crop and /2 downscale, this will crop a
1280x960 input frame to 640x480, and then /2 downscale in both
dimensions to 320x240 (assumes ipu1_csi0 is linked to ipu1_csi0_mux):

.. code-block:: none

   media-ctl -V "'ipu1_csi0_mux':2[fmt:UYVY2X8/1280x960]"
   media-ctl -V "'ipu1_csi0':0[crop:(0,0)/640x480]"
   media-ctl -V "'ipu1_csi0':0[compose:(0,0)/320x240]"

Frame Skipping in ipuX_csiY
---------------------------

The CSI supports frame rate decimation, via frame skipping. Frame
rate decimation is specified by setting the frame intervals at
sink and source pads. The ipuX_csiY entity then applies the best
frame skip setting to the CSI to achieve the desired frame rate
at the source pad.

The following example reduces an assumed incoming 60 Hz frame
rate by half at the IDMAC output source pad:

.. code-block:: none

   media-ctl -V "'ipu1_csi0':0[fmt:UYVY2X8/640x480@1/60]"
   media-ctl -V "'ipu1_csi0':2[fmt:UYVY2X8/640x480@1/30]"

Frame Interval Monitor in ipuX_csiY
-----------------------------------

The adv718x decoders can occasionally send corrupt fields during
NTSC/PAL signal re-sync (too few or too many video lines). When this
happens, the IPU triggers a mechanism to re-establish vertical sync by
adding one dummy line every frame, which causes a rolling effect from
image to image, and can last a long time before a stable image is
recovered. Or sometimes the mechanism doesn't work at all, causing a
permanent split image (one frame contains lines from two consecutive
captured images).

From experiment it was found that during image rolling, the frame
intervals (elapsed time between two EOF's) drop below the nominal
value for the current standard, by about one frame time (60 usec),
and remain at that value until rolling stops.

While the reason for this observation isn't known (the IPU dummy
line mechanism should show an increase in the intervals by one line
time every frame, not a fixed value), we can use it to detect the
corrupt fields using a frame interval monitor. If the FIM detects a
bad frame interval, the ipuX_csiY subdev will send the event
V4L2_EVENT_IMX_FRAME_INTERVAL_ERROR. Userland can register for the FIM
event notification on the ipuX_csiY subdev device node, and can issue
a streaming restart when this event is received to correct the
rolling/split image.

The ipuX_csiY subdev includes custom controls to tweak some dials for
the FIM. If one of these controls is changed during streaming, the FIM
will be reset and will continue with the new settings.

- V4L2_CID_IMX_FIM_ENABLE

  Enable/disable the FIM.

- V4L2_CID_IMX_FIM_NUM

  How many frame interval measurements to average before comparing
  against the nominal frame interval reported by the sensor. This can
  reduce noise caused by interrupt latency.

- V4L2_CID_IMX_FIM_TOLERANCE_MIN

  If the averaged intervals fall outside nominal by this amount, in
  microseconds, the V4L2_EVENT_IMX_FRAME_INTERVAL_ERROR event is sent.

- V4L2_CID_IMX_FIM_TOLERANCE_MAX

  If any intervals are higher than this value, those samples are
  discarded and do not enter into the average. This can be used to
  discard really high interval errors that might be due to interrupt
  latency from high system load.

- V4L2_CID_IMX_FIM_NUM_SKIP

  How many frames to skip after a FIM reset or stream restart before
  the FIM begins to average intervals.

- V4L2_CID_IMX_FIM_ICAP_CHANNEL
- V4L2_CID_IMX_FIM_ICAP_EDGE

  These controls configure an input capture channel as the method for
  measuring frame intervals. This is superior to the default method of
  measuring frame intervals via the EOF interrupt, since it is not
  subject to uncertainty errors introduced by interrupt latency.

  Input capture requires hardware support. A VSYNC signal must be
  routed to one of the i.MX6 input capture channel pads.

  V4L2_CID_IMX_FIM_ICAP_CHANNEL configures which i.MX6 input capture
  channel to use. This must be 0 or 1.

  V4L2_CID_IMX_FIM_ICAP_EDGE configures which signal edge will trigger
  input capture events. By default the input capture method is disabled
  with a value of IRQ_TYPE_NONE. Set this control to
  IRQ_TYPE_EDGE_RISING, IRQ_TYPE_EDGE_FALLING, or IRQ_TYPE_EDGE_BOTH to
  enable input capture, triggered on the given signal edge(s).

  When input capture is disabled, frame intervals will be measured via
  the EOF interrupt.

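As a hedged sketch of adjusting these dials from userspace (the subdev
device path and the control names below are assumptions, not taken from
the driver; check the names that v4l2-ctl --list-ctrls actually reports
for the V4L2_CID_IMX_FIM_* controls on your system):

.. code-block:: none

   # Assumed path of the ipu1_csi0 subdev node; list the FIM controls
   v4l2-ctl -d /dev/v4l-subdev4 --list-ctrls

   # Hypothetical control names: enable the FIM, average 8 intervals,
   # and send the error event when the average deviates by more than
   # 1000 microseconds from the nominal frame interval
   v4l2-ctl -d /dev/v4l-subdev4 --set-ctrl=fim_enable=1,fim_num=8,fim_tolerance_min=1000
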
ipuX_vdic
---------

The VDIC carries out motion compensated de-interlacing, with three
motion compensation modes: low, medium, and high motion. The mode is
specified with the menu control V4L2_CID_DEINTERLACING_MODE. It has
two sink pads and a single source pad.

The direct sink pad receives from an ipuX_csiY direct pad. With this
link the VDIC can only operate in high motion mode.

When the IDMAC sink pad is activated, it receives from an output
or mem2mem device node. With this pipeline, it can also operate
in low and medium modes, because these modes require receiving
frames from memory buffers. Note that an output or mem2mem device
is not implemented yet, so this sink pad currently has no links.

The source pad routes to the IC pre-processing entity ipuX_ic_prp.

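As a hedged sketch (the subdev device path and the control name shown
are assumptions; use the name that v4l2-ctl reports for
V4L2_CID_DEINTERLACING_MODE on your system), the motion compensation
mode could be inspected and changed with v4l2-ctl on the ipuX_vdic
subdev node:

.. code-block:: none

   # Assumed path of the ipu1_vdic subdev node; list the menu entries
   # of the de-interlacing mode control
   v4l2-ctl -d /dev/v4l-subdev6 --list-ctrls-menus

   # Hypothetical control name and menu index: select one of the modes
   v4l2-ctl -d /dev/v4l-subdev6 --set-ctrl=deinterlacing_mode=2
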
ipuX_ic_prp
-----------

This is the IC pre-processing entity. It acts as a router, routing
data from its sink pad to one or both of its source pads.

It has a single sink pad. The sink pad can receive from the ipuX_csiY
direct pad, or from ipuX_vdic.

This entity has two source pads. One source pad routes to the
pre-process encode task entity (ipuX_ic_prpenc), the other to the
pre-process viewfinder task entity (ipuX_ic_prpvf). Both source pads
can be activated at the same time if the sink pad is receiving from
ipuX_csiY. Only the source pad to the pre-process viewfinder task
entity can be activated if the sink pad is receiving from ipuX_vdic
(frames from the VDIC can only be processed by the pre-process
viewfinder task).

ipuX_ic_prpenc
--------------

This is the IC pre-processing encode entity. It has a single sink
pad from ipuX_ic_prp, and a single source pad. The source pad is
routed to a capture device node, with a node name of the format
"ipuX_ic_prpenc capture".

This entity performs the IC pre-process encode task operations:
color-space conversion, resizing (downscaling and upscaling),
horizontal and vertical flip, and 90/270 degree rotation. Flip
and rotation are provided via standard V4L2 controls.

Like the ipuX_csiY IDMAC source, it can also perform simple de-interlace
without motion compensation, and pixel reordering.

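Because flip and rotation are standard V4L2 controls, they can be set
with v4l2-ctl, either on the ipuX_ic_prpenc subdev node or on its
capture video node (which inherits the controls, see Usage Notes). A
short sketch, with the device path being an assumption:

.. code-block:: none

   # Assumed capture node path: rotate 90 degrees and flip horizontally
   v4l2-ctl -d /dev/video1 --set-ctrl=rotate=90,horizontal_flip=1

   # Read back the current flip/rotation settings
   v4l2-ctl -d /dev/video1 --get-ctrl=rotate,horizontal_flip,vertical_flip
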
ipuX_ic_prpvf
-------------

This is the IC pre-processing viewfinder entity. It has a single sink
pad from ipuX_ic_prp, and a single source pad. The source pad is routed
to a capture device node, with a node name of the format
"ipuX_ic_prpvf capture".

It is identical in operation to ipuX_ic_prpenc, with the same resizing
and CSC operations and flip/rotation controls. It will receive and
process de-interlaced frames from the ipuX_vdic if ipuX_ic_prp is
receiving from ipuX_vdic.

Like the ipuX_csiY IDMAC source, it can perform simple de-interlace
without motion compensation. However, note that if the ipuX_vdic is
included in the pipeline (ipuX_ic_prp is receiving from ipuX_vdic),
it's not possible to use simple de-interlace in ipuX_ic_prpvf, since
the ipuX_vdic has already carried out de-interlacing (with motion
compensation) and therefore the field type output from ipuX_ic_prp can
only be none.

Capture Pipelines
-----------------

The following describes the various use-cases supported by the
pipelines.

The links shown do not include the backend sensor, video mux, or MIPI
CSI-2 receiver links. These depend on the type of sensor interface
(parallel or MIPI CSI-2). So these pipelines begin with:

sensor -> ipuX_csiY_mux -> ...

for parallel sensors, or:

sensor -> imx6-mipi-csi2 -> (ipuX_csiY_mux) -> ...

for MIPI CSI-2 sensors. The imx6-mipi-csi2 receiver may need to route
to the video mux (ipuX_csiY_mux) before sending to the CSI, depending
on the MIPI CSI-2 virtual channel, hence ipuX_csiY_mux is shown in
parentheses.

Unprocessed Video Capture:
--------------------------

Send frames directly from the sensor to the camera device interface
node, with no conversions, via the ipuX_csiY IDMAC source pad:

-> ipuX_csiY:2 -> ipuX_csiY capture

IC Direct Conversions:
----------------------

This pipeline uses the pre-process encode entity to route frames
directly from the CSI to the IC, to carry out scaling up to 1024x1024
resolution, CSC, flipping, and image rotation:

-> ipuX_csiY:1 -> 0:ipuX_ic_prp:1 -> 0:ipuX_ic_prpenc:1 -> ipuX_ic_prpenc capture

Motion Compensated De-interlace:
--------------------------------

This pipeline routes frames from the CSI direct pad to the VDIC entity
to support motion-compensated de-interlacing (high motion mode only),
scaling up to 1024x1024, CSC, flip, and rotation:

-> ipuX_csiY:1 -> 0:ipuX_vdic:2 -> 0:ipuX_ic_prp:2 -> 0:ipuX_ic_prpvf:1 -> ipuX_ic_prpvf capture

Usage Notes
-----------

To aid in configuration and for backward compatibility with V4L2
applications that access controls only from video device nodes, the
capture device interfaces inherit controls from the active entities
in the current pipeline, so controls can be accessed either directly
from the subdev or from the active capture device interface. For
example, the FIM controls are available either from the ipuX_csiY
subdevs or from the active capture device.

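As a concrete (hedged) illustration, assuming ipu1_csi0 is active in
the current pipeline, the same FIM control set should appear in both of
the following listings. The device paths are assumptions; use
media-ctl -p to find the actual subdev and video nodes on your system:

.. code-block:: none

   # From the ipu1_csi0 subdev node (path assumed)
   v4l2-ctl -d /dev/v4l-subdev4 --list-ctrls

   # From the active capture video node (path assumed)
   v4l2-ctl -d /dev/video0 --list-ctrls
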
The following are specific usage notes for the Sabre* reference
boards:

SabreLite with OV5642 and OV5640
--------------------------------

This platform requires the OmniVision OV5642 module with a parallel
camera interface, and the OV5640 module with a MIPI CSI-2
interface. Both modules are available from Boundary Devices:

- https://boundarydevices.com/product/nit6x_5mp
- https://boundarydevices.com/product/nit6x_5mp_mipi

Note that if only one camera module is available, the other sensor
node can be disabled in the device tree.

The OV5642 module is connected to the parallel bus input on the i.MX
internal video mux to IPU1 CSI0. Its i2c bus connects to i2c bus 2.

The MIPI CSI-2 OV5640 module is connected to the i.MX internal MIPI
CSI-2 receiver, and the four virtual channel outputs from the receiver
are routed as follows: vc0 to the IPU1 CSI0 mux, vc1 directly to IPU1
CSI1, vc2 directly to IPU2 CSI0, and vc3 to the IPU2 CSI1 mux. The
OV5640 is also connected to i2c bus 2 on the SabreLite, therefore the
OV5642 and OV5640 must not share the same i2c slave address.

The following basic example configures unprocessed video capture
pipelines for both sensors. The OV5642 is routed to ipu1_csi0, and
the OV5640, transmitting on MIPI CSI-2 virtual channel 1 (which is
imx6-mipi-csi2 pad 2), is routed to ipu1_csi1. Both sensors are
configured to output 640x480; the OV5642 outputs YUYV2X8 and the
OV5640 UYVY2X8:

.. code-block:: none

   # Setup links for OV5642
   media-ctl -l "'ov5642 1-0042':0 -> 'ipu1_csi0_mux':1[1]"
   media-ctl -l "'ipu1_csi0_mux':2 -> 'ipu1_csi0':0[1]"
   media-ctl -l "'ipu1_csi0':2 -> 'ipu1_csi0 capture':0[1]"
   # Setup links for OV5640
   media-ctl -l "'ov5640 1-0040':0 -> 'imx6-mipi-csi2':0[1]"
   media-ctl -l "'imx6-mipi-csi2':2 -> 'ipu1_csi1':0[1]"
   media-ctl -l "'ipu1_csi1':2 -> 'ipu1_csi1 capture':0[1]"
   # Configure pads for OV5642 pipeline
   media-ctl -V "'ov5642 1-0042':0 [fmt:YUYV2X8/640x480 field:none]"
   media-ctl -V "'ipu1_csi0_mux':2 [fmt:YUYV2X8/640x480 field:none]"
   media-ctl -V "'ipu1_csi0':2 [fmt:AYUV32/640x480 field:none]"
   # Configure pads for OV5640 pipeline
   media-ctl -V "'ov5640 1-0040':0 [fmt:UYVY2X8/640x480 field:none]"
   media-ctl -V "'imx6-mipi-csi2':2 [fmt:UYVY2X8/640x480 field:none]"
   media-ctl -V "'ipu1_csi1':2 [fmt:AYUV32/640x480 field:none]"

Streaming can then begin independently on the capture device nodes
"ipu1_csi0 capture" and "ipu1_csi1 capture". The v4l2-ctl tool can
be used to select any supported YUV pixelformat on the capture device
nodes, including planar.

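For instance, the following hedged sketch selects the planar YUV420
format (V4L2 fourcc YU12) on the "ipu1_csi0 capture" node and grabs a
few frames to a file. The /dev/video0 path is an assumption; check
which video node corresponds to "ipu1_csi0 capture" with media-ctl -p:

.. code-block:: none

   # Select 640x480 planar YUV420 on the capture node (path assumed)
   v4l2-ctl -d /dev/video0 --set-fmt-video=width=640,height=480,pixelformat=YU12

   # Capture 10 frames to a raw file
   v4l2-ctl -d /dev/video0 --stream-mmap --stream-count=10 --stream-to=/tmp/ov5642.yuv
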
SabreAuto with ADV7180 decoder
------------------------------

On the SabreAuto, an on-board ADV7180 SD decoder is connected to the
parallel bus input on the internal video mux to IPU1 CSI0.

The following example configures a pipeline to capture from the ADV7180
video decoder, assuming NTSC 720x480 input signals, with motion
compensated de-interlacing. Pad field types assume the adv7180 outputs
"interlaced". $outputfmt can be any format supported by the
ipu1_ic_prpvf entity at its output pad:

.. code-block:: none

   # Setup links
   media-ctl -l "'adv7180 3-0021':0 -> 'ipu1_csi0_mux':1[1]"
   media-ctl -l "'ipu1_csi0_mux':2 -> 'ipu1_csi0':0[1]"
   media-ctl -l "'ipu1_csi0':1 -> 'ipu1_vdic':0[1]"
   media-ctl -l "'ipu1_vdic':2 -> 'ipu1_ic_prp':0[1]"
   media-ctl -l "'ipu1_ic_prp':2 -> 'ipu1_ic_prpvf':0[1]"
   media-ctl -l "'ipu1_ic_prpvf':1 -> 'ipu1_ic_prpvf capture':0[1]"
   # Configure pads
   media-ctl -V "'adv7180 3-0021':0 [fmt:UYVY2X8/720x480]"
   media-ctl -V "'ipu1_csi0_mux':2 [fmt:UYVY2X8/720x480 field:interlaced]"
   media-ctl -V "'ipu1_csi0':1 [fmt:AYUV32/720x480 field:interlaced]"
   media-ctl -V "'ipu1_vdic':2 [fmt:AYUV32/720x480 field:none]"
   media-ctl -V "'ipu1_ic_prp':2 [fmt:AYUV32/720x480 field:none]"
   media-ctl -V "'ipu1_ic_prpvf':1 [fmt:$outputfmt field:none]"

Streaming can then begin on the capture device node at
"ipu1_ic_prpvf capture". The v4l2-ctl tool can be used to select any
supported YUV or RGB pixelformat on the capture device node.

This platform accepts Composite Video analog inputs to the ADV7180 on
Ain1 (connector J42).

SabreSD with MIPI CSI-2 OV5640
------------------------------

Similarly to the SabreLite, the SabreSD supports a parallel interface
OV5642 module on IPU1 CSI0, and a MIPI CSI-2 OV5640 module. The OV5642
connects to i2c bus 1 and the OV5640 to i2c bus 2.

The device tree for SabreSD includes OF graphs for both the parallel
OV5642 and the MIPI CSI-2 OV5640, but as of this writing only the MIPI
CSI-2 OV5640 has been tested, so the OV5642 node is currently disabled.
The OV5640 module connects to MIPI connector J5 (the compatible module
part number and purchase URL are not known).

The following example configures a direct conversion pipeline to capture
from the OV5640, transmitting on MIPI CSI-2 virtual channel 1. $sensorfmt
can be any format supported by the OV5640. $sensordim is the frame
dimension part of $sensorfmt (minus the mbus pixel code). $outputfmt can
be any format supported by the ipu1_ic_prpenc entity at its output pad:

.. code-block:: none

   # Setup links
   media-ctl -l "'ov5640 1-003c':0 -> 'imx6-mipi-csi2':0[1]"
   media-ctl -l "'imx6-mipi-csi2':2 -> 'ipu1_csi1':0[1]"
   media-ctl -l "'ipu1_csi1':1 -> 'ipu1_ic_prp':0[1]"
   media-ctl -l "'ipu1_ic_prp':1 -> 'ipu1_ic_prpenc':0[1]"
   media-ctl -l "'ipu1_ic_prpenc':1 -> 'ipu1_ic_prpenc capture':0[1]"
   # Configure pads
   media-ctl -V "'ov5640 1-003c':0 [fmt:$sensorfmt field:none]"
   media-ctl -V "'imx6-mipi-csi2':2 [fmt:$sensorfmt field:none]"
   media-ctl -V "'ipu1_csi1':1 [fmt:AYUV32/$sensordim field:none]"
   media-ctl -V "'ipu1_ic_prp':1 [fmt:AYUV32/$sensordim field:none]"
   media-ctl -V "'ipu1_ic_prpenc':1 [fmt:$outputfmt field:none]"

Streaming can then begin on the "ipu1_ic_prpenc capture" node. The
v4l2-ctl tool can be used to select any supported YUV or RGB
pixelformat on the capture device node.

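As a hedged illustration of the variable substitution above, assume the
OV5640 is configured for a UYVY2X8 640x480 mode and that the
ipu1_ic_prpenc output pad accepts UYVY2X8 (verify both on your system);
the pad configuration lines would then read:

.. code-block:: none

   # Assumed: $sensorfmt = UYVY2X8/640x480, $sensordim = 640x480,
   #          $outputfmt = UYVY2X8/640x480
   media-ctl -V "'ov5640 1-003c':0 [fmt:UYVY2X8/640x480 field:none]"
   media-ctl -V "'imx6-mipi-csi2':2 [fmt:UYVY2X8/640x480 field:none]"
   media-ctl -V "'ipu1_csi1':1 [fmt:AYUV32/640x480 field:none]"
   media-ctl -V "'ipu1_ic_prp':1 [fmt:AYUV32/640x480 field:none]"
   media-ctl -V "'ipu1_ic_prpenc':1 [fmt:UYVY2X8/640x480 field:none]"
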
Known Issues
------------

1. When using 90 or 270 degree rotation control at capture resolutions
   near the IC resizer limit of 1024x1024, and combined with planar
   pixel formats (YUV420, YUV422p), frame capture will often fail with
   no end-of-frame interrupts from the IDMAC channel. To work around
   this, use lower resolution and/or packed formats (YUYV, RGB3, etc.)
   when 90 or 270 rotations are needed.

File list
---------

drivers/staging/media/imx/
include/media/imx.h
include/linux/imx-media.h

References
----------

.. [#f1] http://www.nxp.com/assets/documents/data/en/reference-manuals/IMX6DQRM.pdf
.. [#f2] http://www.nxp.com/assets/documents/data/en/reference-manuals/IMX6SDLRM.pdf

Authors
-------

- Steve Longerbeam <steve_longerbeam@mentor.com>
- Philipp Zabel <kernel@pengutronix.de>
- Russell King <linux@armlinux.org.uk>

Copyright (C) 2012-2017 Mentor Graphics Inc.