JPEG encoding on the Odroid C1/C1+

The Odroid C1+ is one of the most cost-effective yet reasonably powerful boards I know of, featuring the 32-bit Amlogic S805 ARM processor. Although the Odroid C1+ is meant to be as open as possible, the documentation is quite sparse (if not nonexistent) when it comes to hardware video encoding or decoding.

Along those lines, one of the advertised features of the processor is the ability to do JPEG hardware encoding. Unfortunately, you don’t get much beyond the advertisement itself. I believe there is some Android library around that helps you get it done, but anyway, here is how you do it regardless of your Linux flavour.

Just a quick note: I say this is how you do it, but strictly speaking it is not; I worked this out by reading the driver code, roughly figuring out how it works, and a fair amount of trial and error, so no warranties are given! You can find the driver in the file drivers/amlogic/amports/jpegenc.c of the Amlogic kernel tree.

Getting the board ready for action

So, let’s get the board to do some encoding. The first thing you need to do is make sure you have JPEG encoding enabled in your kernel. As of this writing, the latest pre-built kernel has the module built in (kernel build option AM_JPEG_ENCODER), so you probably don’t need to rebuild your kernel.

https://github.com/hardkernel/linux/tree/odroidc-3.10.y

Unfortunately, the encoding device itself is not enabled. In order to enable it, you need to add the following to your device tree:

jpegenc {
    compatible = "amlogic,jpegenc";
    dev_name = "jpegenc.0";
    status = "okay";
    reserve-memory = <0x01800000>;  //24M
    reserve-iomap = "true";
};

If you don’t know what I am talking about, but still want to try hardware encoding, despair not. Here is a jpegenc-enabled compiled version of the C1+ device tree, tested on kernels 3.10.80-121 and 3.10.80-131 (and with good chances to work on any odroidc-3.10.y kernel). Just download it, copy it to the boot partition (normally mounted under /media/boot/) with the name ‘meson8b_odroidc.dtb’, and reboot the board (remember to make a backup copy of the old ‘meson8b_odroidc.dtb’ file).

Licenses are important! Like the rest of the kernel, device trees are released under the GPLv2 license, so here is the corresponding source code of the compiled device tree. (If you don’t think this is important, keep in mind that quite often it is thanks to copyleft licenses that we users can hack with things like the jpegenc module!).

If everything has gone well, you should be able to see the file /dev/jpegenc in your filesystem. That’s the file we will use to write our raw image data and read back our encoded JPEG image.

Module memory allocation

Let’s stop for a moment and have a look at how the driver allocates and uses memory. This is not needed to start doing your own encoding, so feel free to jump to the next section.

The jpegenc module requests a single big buffer and does all the work in it. It uses contiguous memory allocation (CMA), which means the memory is physically contiguous. The module splits the buffer into two parts, the first for raw data input, and the second for encoded JPEG output. You can see the buffer definitions at the top of the jpegenc.c file. In our case it looks like this:

    },{
        .lev_id = JPEGENC_BUFFER_LEVEL_5M,
        .max_width = 2624,
        .max_height = 2624,
        .min_buffsize = 0x1800000,
        .input = {
            .buf_start = 0,
            .buf_size = 0x13B3000,
        },
        .bitstream = {
            .buf_start = 0x1400000,
            .buf_size = 0x400000,
        }
    },{

I say ‘our case’, because there are other buffer definitions depending on how much memory the module requests. The buffer size here is 24 MB (0x1800000), defined by min_buffsize. It also defines the maximum width and height for the image, and the input and output sub-buffers. Note that the output is referred to as bitstream.

If you need to work with bigger images, you should change the reserve-memory attribute in the device tree (see the previous section). With the right setting, you can go as high as 8192×8192. However (I didn’t test this myself), you may need to recompile the module, because there seems to be a bug in the bitstream (output) buffer size definition for sizes bigger than ours:

    },{
        .lev_id = JPEGENC_BUFFER_LEVEL_HD,
        .max_width = 8192,
        .max_height = 8192,
        .min_buffsize = 0xc400000,
        .input = {
            .buf_start = 0,
            .buf_size = 0xc000000,
        },
        .bitstream = {
            .buf_start = 0xc000000,
            .buf_size = 0x4000000,		/* Wrong! Drop a zero: 0x400000 */
        }
    }

Anyway, back to our discussion: in order to work with the jpegenc module, we will map this big buffer and use it to write our raw image and read back the encoded output image. We already know the offset of the output (bitstream) buffer (.buf_start = 0x1400000), but userspace programs should get this information by querying the module with the right ioctl() call, as we’ll see below.

Let’s code!

To do our encoding, we start by opening the /dev/jpegenc file:

#include <sys/types.h>
#include <sys/stat.h>
#include <fcntl.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <unistd.h>	/* for usleep() below */
#include <errno.h>
#include <stdio.h>
#include <string.h>

#include "jpegenc.h"

int main()
{
	/* Open the jpeg encoder file. It may require root permissions */
	int fd = open("/dev/jpegenc", O_RDWR);
	if(fd == -1)
	{
		perror("Error on jpegenc open");
		return 1;
	}

The file jpegenc.h must be included here. It can be found in the Amlogic kernel tree, or here. I copy it to my local folder, since it doesn’t seem to be among the installed Linux headers.

Once the jpegenc file is opened, we can query the module buffer info (as discussed in the previous section):

	/* Query and print buffer info */
	unsigned int addr_info[5];
	ioctl(fd, JPEGENC_IOC_GET_BUFFINFO, addr_info);
	printf("Total buffer size: 0x%08X\n"
		"Input buffer -> start: 0x%08X, size: 0x%08X\n"
		"Bitstream (output) buffer -> start: 0x%08X, size: 0x%08X\n",
		addr_info[0], addr_info[1], addr_info[2],
		addr_info[3], addr_info[4]);

I just pass an array of unsigned int to the ioctl() call, but it may be better to define a proper struct. The printf() call shows what each number means.

The following ioctl() tells the module to initialize the encoder (things like power-on, canvas initialization, microcode loading and interrupt request).

	ioctl(fd, JPEGENC_IOC_CONFIG_INIT, 0);

Next, we tell the driver what image size we will be using:

	/* Set image width and height */
	int width, height;
	width = 1280;
	height = 720;
	ioctl(fd, JPEGENC_IOC_SET_ENCODER_WIDTH, &width);
	ioctl(fd, JPEGENC_IOC_SET_ENCODER_HEIGHT, &height);

With our current device tree setting, we could go as high as 2624×2624, though I haven’t tried it myself.
It is now a good time to map the module buffer to our memory space:

	/* Map the jpegenc module "big buffer" to userspace */
	unsigned char *data;
	data = mmap(NULL, addr_info[0], PROT_READ | PROT_WRITE,
			MAP_SHARED, fd, 0);
	if (data == MAP_FAILED)
	{
		perror("Error on mmap");
		return 1;
	}

At this point, the module is expecting us to put our raw frame at data + offset, where offset is whatever value we have at addr_info[1] (see the JPEGENC_IOC_GET_BUFFINFO ioctl() call above). However, this value (addr_info[1]) happens to be zero for any device tree configuration, at least with kernels up to 3.10.80-131, so really you could very much ignore it.

	/* Load the raw image to the mapped buffer */
	FILE *fin = fopen("image.raw", "r");
	if (fin == NULL)
	{
		perror("Error opening image.raw");
		return 1;
	}
	int ret = fread(data + addr_info[1], 1, 1280*720*3, fin);
	fclose(fin);
	if (ret != 1280*720*3)
	{
		printf("Could not read full frame size\n");
		return 1;
	}

Here, I load my raw data from a file, but obviously it could be loaded from anywhere.

Even though my raw image is encoded as RGB888, you may well use any of the following color formats, as listed in jpegenc.h (I’m afraid I have only tested RGB888 so far; comments are welcome if you successfully use another):

typedef enum {
    FMT_YUV422_SINGLE = 0,
    FMT_YUV444_SINGLE,
    FMT_NV21,
    FMT_NV12,
    FMT_YUV420,
    FMT_YUV444_PLANE,
    FMT_RGB888,
    FMT_RGB888_PLANE,
    FMT_RGB565,
    FMT_RGBA8888,
    MAX_FRAME_FMT
} jpegenc_frame_fmt;

Once we have passed our raw data to the jpegenc driver, we can ask it to start encoding. To do so, we use an array (or a properly defined struct) of seven unsigned int values. The following code shows what each of them means.

	/* Start encoding now */
	unsigned int encode_cmd[7];
	encode_cmd[0] = LOCAL_BUFF;		//Memory type: probably LOCAL_BUFF;
						//PHYSICAL_BUFF or CANVAS_BUFF don't
						//seem to be possible from userspace.
	encode_cmd[1] = FMT_RGB888;		//Input format
	encode_cmd[2] = FMT_YUV422_SINGLE;	//Output format
	encode_cmd[3] = 0;			//Ignored
	encode_cmd[4] = 0;			//Offset: for LOCAL_BUFF, set to 0
	encode_cmd[5] = width*height*3;		//Raw image size in bytes
	encode_cmd[6] = 1;			//need_flush: set to 1 for LOCAL_BUFF,
						//maybe not needed.

	ioctl(fd, JPEGENC_IOC_NEW_CMD, encode_cmd);

As you see, there is a fair amount of guesswork on my part here. In the driver (jpegenc.c), these values are passed to a function called set_jpeg_input_format(); refer to it if you are interested in what they do.

If you are trying to encode a non-RGB888 frame, you will need to change the value passed in encode_cmd[1] to one of the values from the enum above.

As far as I know, all the ioctl() calls we have made so far returned once the action was complete. This is not the case here: JPEGENC_IOC_NEW_CMD is a “non-blocking” command. So we will have to poll until completion, using the JPEGENC_IOC_GET_STAGE ioctl():

	/* Loop wait for the encoder to finish */
	unsigned int stage = ENCODER_IDLE;
	while (stage != ENCODER_DONE)
	{
		ioctl(fd, JPEGENC_IOC_GET_STAGE, &stage);
		printf("Stage is %d\n", stage);
		usleep(1000);
	}
	printf("Job done!\n");

Once the driver returns ENCODER_DONE, we are ready to grab our compressed JPEG frame! Where should we get it from? Back to the first ioctl() at the top of the code, the value of addr_info[3] was the offset for the bitstream (output) part of the big buffer we mapped with mmap (which we just called data here).

So, we can read our compressed frame at data + addr_info[3] but, where does it end? Conveniently, we can query the size of the output data:

	/* Query the output size */
	unsigned int output_size = 0;
	ioctl(fd, JPEGENC_IOC_GET_OUTPUT_SIZE, &output_size);
	printf("Output size is %u bytes\n", output_size);

And that’s it! Here is an example of saving the encoded frame into a file:

	/* Write the image to file */
	FILE *fout = fopen("./image.jpeg", "w");
	if (fout == NULL)
	{
		perror("Error opening image.jpeg");
		return 1;
	}
	unsigned int written = fwrite(data + addr_info[3], 1, output_size, fout);
	fclose(fout);
	if (written != output_size)
	{
		printf("Could not write the full output file\n");
		return 1;
	}

You can get the full example code here. Remember to put jpegenc.h in the same folder. You can compile the example from the Odroid itself with:

$ gcc main.c -o myencoder

Other stuff

By default this code needs to be run as root, but you can easily change that by running:

$ sudo chmod 666 /dev/jpegenc

That should allow any user to do some encoding.

On my board, the code loops six times in the wait loop before finishing, which means it takes roughly 6 ms to encode a 720p image. If things scale up nicely, it could possibly do 100+ fps MJPEG encoding with very little CPU use (and, hopefully, without heating up too much!).

Finally, there are a bunch of ioctl() calls I haven’t discussed here, since this is only a basic example. In fact, I haven’t even tried them myself, but it seems possible to set different quality levels, or tell the encoder to use your own quantization table (I’ll leave that to you, intrepid reader).

Please, leave me any comments for any corrections you may spot, or any progress you do on your own with the Odroid C1+ video hardware!

Happy hacking!
