Jaggy artifacts are introduced by mismatches between neighbouring pixels. In the YUV color space, the focus is on the Y component.
Some jaggy artifact is unavoidable. For instance, when WEAVE (field combination) deinterlacing is deployed, any change between fields will result in “jaggies”, because the pixels in one field do not line up with the pixels in the other.
I ran into an issue where the UYVY output is good, while the YUYV output shows apparent jaggies. The first thing that came to mind: do the Ys get swapped on output? Y0 U0 Y1 V1 –> Y1 Ux Y0 Vx?
Luckily, we can route the output back to the input interface and capture the raw data for analysis. The raw data clearly shows that Y1, Y3, ... are missing, while Y0, Y2, ... each appear twice.
Open the raw YUV files in a hex editor:
at address 0xC50, the UYVY file shows “87 59 7E 5C”; the YUYV file shows “59 87 59 7E”.
Screen is a compositing windowing system. It can combine multiple content sources into a single image.
Two types of composition:
- Hardware composition: composes all visible (enabled) pipelines at display time.
  - To use this:
    - specify a pipeline for your window with screen_set_window_property_iv().
    - use screen_set_window_property_iv() to set the SCREEN_USAGE_OVERLAY bit of your SCREEN_PROPERTY_USAGE window property.
  - The window is considered autonomous because no composition is performed (on the buffers belonging to this window) by the composition manager.
  - For a window to be displayed autonomously on a pipeline, the format of the window's buffers must be supported by the associated pipeline.
- Composition manager: composes multiple window buffers (belonging to multiple windows) into a single buffer, which is associated with a pipeline.
  - The single buffer is called the composite buffer, or screen framebuffer.
  - Used when your platform doesn't have the hardware capability to support enough pipelines for the required elements, or to support a particular behavior.
  - Only one pipeline is involved (you don't specify the pipeline number or OVERLAY usage).
  - Requires CPU and/or GPU processing power to compose buffers.
Note: a pipeline (in the display controller) corresponds to a layer (in the composition manager), which is indexed by the EGL level of the app.
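The hardware-composition setup above can be sketched against the Screen API. This is a non-compilable sketch rather than a tested program: the pipeline id (1) is a board-specific placeholder, and error checking is omitted.

```c
/* Sketch: make a window autonomous on a hardware pipeline.
   The pipeline id (1) is a placeholder; check your board's
   display-controller documentation for valid pipelines. */
int pipeline = 1;
screen_set_window_property_iv(win, SCREEN_PROPERTY_PIPELINE, &pipeline);

int usage = SCREEN_USAGE_WRITE | SCREEN_USAGE_OVERLAY;
screen_set_window_property_iv(win, SCREEN_PROPERTY_USAGE, &usage);
/* Set usage before creating the window's buffers. */
```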
Pipeline ordering (Hardware property) and z-ordering (for windows)
- Pipeline ordering and the z-ordering of windows on a layer are applied independently of each other.
- Pipeline ordering takes precedence over z-ordering operations in Screen. Screen does not have control over the ordering of hardware pipelines. Screen windows are always arranged in the z-order that is specified by the application.
- If your application manually assigns pipelines, you must ensure that the z-order values make sense with regard to the pipeline order of the target hardware. For example, if you assign a high z-order value to a window (meaning it is to be placed in the foreground), then you must make a corresponding assignment of this window to a top layer pipeline. Otherwise the result may not be what you expect, regardless of the z-order value.
Window: a window represents the fundamental drawing surface.
- An application needs to use multiple windows when content comes from different sources, when one or more parts of the application must be updated independently of the others, or when the application targets multiple displays.
- To use the same window, the content must have the same FORMAT, DISPLAY, BRIGHTNESS, PIPELINE, POSITION, SIZE, SOURCE_POSITION, SOURCE_SIZE, TRANSPARENCY, ZORDER, etc.
Pixmap: A pixmap is similar to a bitmap except that it can have multiple bits per pixel (a measurement of the depth of the pixmap) that store the intensity or color component values. Bitmaps, by contrast, have a depth of one bit per pixel.
- You can draw directly onto a pixmap surface, outside the viewable area, and then copy the pixmap to a buffer later on.
Note: Multiple buffers can be associated with a window whereas only one buffer can be associated with a pixmap.
Endianness affects how a 32-bit (4-byte) value is stored in memory.
For example, take the value 0x90AB12CD:
In big endian, you store the most significant byte in the smallest address.
90 AB 12 CD
In little endian, you store the least significant byte in the smallest address.
CD 12 AB 90
You have source code and want to build it. You need to give the build system instructions on how to build the code: common.mk and Makefile.
A useful utility, addvariant, will do the magic:
- First, run it with the “-i” option to create the initial common.mk and Makefile in the current working directory.
- Second, add directories as needed, without the “-i” option.
- The two steps can be combined if you only need one level of directory.
Example 1: addvariant -i OS/CPU/VARIANT nto arm dll.le.v7
- Creates a Makefile in the current directory, with contents:
LIST=OS CPU VARIANT
- Creates the nto-arm-dll.le.v7 directory; the Makefile in it says “include ../common.mk”.
Example 2: addvariant -i OS
- Creates a common.mk.
- No sub-directory is created until you run “addvariant nto arm dll.le.v7”. This creates the directories nto, nto/arm, and nto/arm/dll.le.v7, with a Makefile inside each:
- nto/Makefile: LIST= CPU;
- nto/arm/Makefile: LIST=VARIANT;
- nto/arm/dll.le.v7/Makefile: include ../../../common.mk
- Other directories are added if you run “addvariant nto x86 dll”, “addvariant nto arm dll.le.v7.hbas”, etc.
  - In the latter case, the compiler gets the CCFLAGS “-DVARIANT_dll -DVARIANT_le -DVARIANT_v7 -DVARIANT_hbas”.
- But “addvariant nto arm dll.le.v7.hbas adv7280m” gives an error: “too many directory levels specified”.
- You can still add the extra VARIANT level manually:
  - run “addvariant nto arm dll.le.v7.hbas” first,
  - then create an adv7280m folder under dll.le.v7.hbas,
  - create a Makefile with “LIST=VARIANT” in dll.le.v7.hbas,
  - and put a Makefile with “include ../../../../common.mk” in the adv7280m folder.
- The compiler will use the CCFLAGS “-DVARIANT_adv7280m -DVARIANT_dll -DVARIANT_le -DVARIANT_v7 -DVARIANT_hbas”.
Note: if a file “i2c.c” is stored in the variant directory, the build will compile that i2c.c instead of the one in the main directory.
More on the executable name and install path:
- The default install directory is “lib/dll”, unless you add “INSTALLDIR=/usr/lib” in common.mk.
- All variant names, except dll, le, v7, etc., are appended to the final name by default.
- e.g. in the last example above, the executable would be “adv728xm-adv7280m-hbas.so” (project-variant1-variant2.so)
- If you have “NAME=$(IMAGE_PREF_SO)capture-decoder” in common.mk, you will have a library named “libcapture-decoder-adv7280m-hbas.so”
- If you don't want the automatic appending, use:
DEC_VARIANTS := adv7280m adv7281m
EXTRA_SILENT_VARIANTS += $(DEC_VARIANTS)
- You can combine variant names into a compound variant, using a period(.), dash(-) or slash(/) between the variants.
- If you use a variant name like “omap4-5”, the compiler will interpret it as “VARIANT_omap4” and “VARIANT_5”. Therefore you have to use omap45. If you still want “omap4-5” to be part of the library name, in common.mk:
SOC_VARIANTS := j5 imx6 omap45
SOC_VARIANT_NAME = $(filter $(VARIANT_LIST), $(SOC_VARIANTS))
ifeq ($(SOC_VARIANT_NAME), omap45)
The driver configures the DMA to write two frames/fields into one buffer before switching to the next buffer. Under some conditions (e.g. the fields arrive out of order: top1, bottom1, bottom2, top3, bottom3, ...), we don't want the DMA to continue writing top3 into the same buffer (buf_1) that contains bottom2. What I did: disable the corresponding CSI2 context after bottom2 has been saved in buf_1, provide a new physical address (buf_2) to the CSI2 context, and re-enable the CSI2 context. By doing this, I hoped the CSI2 DMA would store top3 into buf_2. However, in my tests the CSI2 DMA would continue writing the next field (top3) into buf_1 most of the time. What we expected:
 buf_0       buf_1       buf_2
+----+      +----+      +----+
| t1 |      |    |      | t3 |
+----+      +----+      +----+
| b1 |      | b2 |      | b3 |
+----+      +----+      +----+

What really happens:

 buf_0       buf_1       buf_2
+----+      +----+      +----+
| t1 |      | t3 |      | t4 |
+----+      +----+      +----+
| b1 |      | b2 |      | b3 |
+----+      +----+      +----+
The solution is to disable the corresponding interface; this forces the DMA to write to the new buffer once the interface is re-enabled.
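The failed attempt and the working fix can be contrasted in a sketch. Every function name here is a hypothetical placeholder, since the actual CSI2 context programming is SoC-specific:

```c
/* Hypothetical pseudocode; these functions are placeholders,
   not a real driver API. */

/* What I tried first (did not work reliably): the DMA kept
   writing top3 into buf_1. */
csi2_context_disable(ctx);
csi2_context_set_dma_addr(ctx, buf_2_paddr);
csi2_context_enable(ctx);

/* What worked: disable the whole interface, not just the context. */
csi2_interface_disable(iface);
csi2_context_set_dma_addr(ctx, buf_2_paddr);
csi2_interface_enable(iface); /* DMA now starts in buf_2 */
```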
Scenario: capture runs in the background and keeps making captured buffers available to applications. The application gets one frame, displays it, gets another frame, displays it, and so on.
In theory: 60 fps capture + 60 fps display.
In practice: the display rate is slightly slower than the capture rate.
If the application asks for a captured buffer only after the previous one has been displayed, the frame dropping occurs on the capture side;
if the application doesn't wait for the previous frame to be displayed, the frame dropping occurs on the display side.
VSYNC interval is 17.183ms
Capture interval is 16.652ms
My setup is as follows.
Two boards that are considered identical, except that board A comes with camera_A hard-wired, while I reworked board B to add an RCA connector.
The DVD player is externally powered; the camera can use an external power supply (camera_B), an internal battery (camera_C), or power from the board (camera_A).
There is no noise pattern with camera_A + board_A. However, when connecting board_B to a DVD player, noise patterns are captured. The facts:
- The DVD player works well with most boards, even ones with the same decoder as board B.
- Switching to another DVD player, the noise pattern is still there.
- On an old board I once avoided the noise patterns by increasing the drive strength of the decoder; however, I have no idea how to avoid it on this board.
- With camera_B, the noise pattern is even worse than with the DVD player: the pattern keeps moving vertically, and the capture module frequently loses sync with the decoder (when the noise pattern moves into the vsync interval, I guess).
I need to eliminate the camera noise patterns to get a fair comparison between camera A and camera B.
What I did:
- Since I reworked the cable between the cameras and the decoder myself, I thought I had introduced the noise by simply twisting the wires together.
- However, even after I soldered the wires tightly and made sure they were not exposed to the air (someone told me an exposed wire would act as an antenna and pick up noise), the noise pattern was still there.
- Moreover, camera_C shows no noise pattern even when connected via the same wires.
- As camera_C doesn't have this issue, we started looking into the power supply. A few options were available:
  - Use power from the board (not done, due to insufficient knowledge of the wire definitions on that board).
  - Use an alternate power supply. What we tried:
    - A 7.5 V power supply with the same DC connector type (the noise got even worse: 3-4 stripes).
    - Rework the power cable to draw power from a laptop via USB (failed, as USB provides 5 V maximum, but this camera requires a higher voltage, 12 V).
    - Find a stable power supply.
  - Luckily, we got hold of a stable power supply with a Molex connector. We cut and twisted the Molex connector's wires together with the camera power cable... and finally the noise pattern disappeared!
- DC connectors: coaxial connector; Molex connector (4 pins, used for disk drive connections).
- USB connector pinout: red is VCC, black is ground; white and green are data.
- Molex connector pinout: yellow wire: 12 V; red wire: 5 V; black wires: ground.
Update: there is one more cause worth checking: clock jitter caused by a flawed circuit design on the receiver side.
The two causes above (low drive strength, or the power supply) don't introduce a noise pattern in free-run mode. If you see noise even in free-run mode, check the circuit design.
See the image below. Removing R12102 and IC12102 and installing R12100 avoids the green-line noise.
The IC is a bilateral switch, which makes the circuit tolerant of slower input rise and fall times when OE is low.