Real-time compression algorithms on FPGA

Melanie Nasic

Hello community,

I am thinking about implementing a real-time compression scheme on an FPGA
running at about 500 MHz. Facing the fact that there is no "universal
compression" algorithm that can compress data regardless of its structure
and statistics, I assume grayscale image data. The image data is delivered
line-wise, meaning that one horizontal line is processed, then the next
one, and so on.
Because of the high data rate I cannot spend much time on a DFT or DCT or
on data modelling. What I am looking for is a way to compress the pixel
data in the spatial rather than the spectral domain, because of latency
aspects, processing complexity, etc. Because of the sequential
line-by-line data transmission, block matching is also not possible in my
opinion. The compression ratio is not so important; a factor of 2:1 would
be sufficient. What really matters is the real-time capability. The
algorithm should be pipelineable and fast. The memory requirements should
not exceed 1 kb.
What "standard" compression schemes would you recommend? Are there
potentialities for a non-standard "own solution"?

Thank you for your comments.

Regards, Melanie
 
I attended Altera's DSP showcase/seminar in Rochester; there was a lot of
interest in implementing MPEG-4 (Part 10 or 11, I can't remember) for
HDTV applications. You didn't say whether the advice you are searching
for is for a commercial product, an R&D science project or a student
project, but even implementing something real-time for DVD quality
(720x480) is worth considering.

I think a while back somebody announced the availability of a JPEG
library on www.opencores.org,
http://www.opencores.org/projects.cgi/web/jpeg/overview I haven't
tried it, but critiquing it could be a good place to start. You could
incorporate parts of its implementation into yours, omit the parts you
don't agree with, and add new features not present in it. There is also
a Video Compression Systems section,
http://www.opencores.org/projects.cgi/web/video_systems/overview that
would be worth taking a look at.

A slightly different approach is using a tool like ImpulseC
(www.impulsec.com). It isn't really C but it is close. It allows you
to efficiently manage parallel systems of functions and integrate them
into VHDL and Verilog designs.

Maybe it is the bubble I have been living in, but your clock speed
seems high; what device families are you considering? I have seen 80
MHz designs outperform similar applications on gigahertz PCs. Don't
let PC marketing skew your objectivity (or maybe it is their choice of
operating systems).

Could you tell us more about the purpose of your project and your end
application?

Derek
 
Melanie Nasic wrote:
Hello community,

...
What "standard" compression schemes would you recommend? Are there
potentialities for a non-standard "own solution"?
I would think of something like:
- run-length encoding
- arithmetic coding
- maybe a dictionary encoding like zip
- if you know some statistics about the values you want to compress, you
could also try whether Huffman coding is sufficient

Regards
Stefan

 
"Melanie Nasic" <quinn_the_esquimo@freenet.de> writes:
Because of the high data rate I cannot spend much time on DFT or DCT
and on data modelling. What I am looking for is a way to compress
the pixel data in spatial not spectral domain because of latency
aspects, processing complexity, etc. Because of the sequential data
transmission line by line a block matching is also not possible in
my opinion. The compression ratio is not so important, factor 2:1
would be sufficient. What really matters is the real time
capability. The algorithm should be pipelineable and fast. The
memory requirements should not exceed 1 kb.
You don't say anything about quality.

Here's C code for a lossy compressor / decompressor which consistently
achieves a 2:1 ratio for 8 bpp grayscale images:

#include <stdint.h>
#include <stdio.h>

int
compress(FILE *fin, FILE *fout)
{
    uint8_t pin[2], pout;

    for (;;) {
        if (fread(&pin, sizeof pin, 1, fin) != 1)
            return (ferror(fin) ? -1 : 0);
        pout = (pin[0] + pin[1]) / 2;
        if (fwrite(&pout, sizeof pout, 1, fout) != 1)
            return -1;
    }
}

int
decompress(FILE *fin, FILE *fout)
{
    uint8_t pin, pout[2];

    for (;;) {
        if (fread(&pin, sizeof pin, 1, fin) != 1)
            return (ferror(fin) ? -1 : 0);
        pout[0] = pout[1] = pin;
        if (fwrite(&pout, sizeof pout, 1, fout) != 1)
            return -1;
    }
}

(Note that the code assumes the input stream contains an even number of
bytes.)

DES
--
Dag-Erling Smørgrav - des@des.no
 
"Melanie Nasic" <quinn_the_esquimo@freenet.de> wrote in message
news:do9206$1m4$1@mamenchi.zrz.TU-Berlin.DE...
Hello community,

...
The answer, as always, is: it all depends ...

Lossless compression (something like run-length encoding) might work for
some kinds of image data (computer screens, rendered images), but will
fail for others (natural images, etc.).

Lossy compression will of course lose something from the image. The
simplest form is probably to average two adjacent pixels, giving you 2:1.
I suspect that anything more complex will exceed your
space/speed/complexity budget.

You need to spell out what type of images you are processing, what level
of 'loss' is acceptable, and why you need compression anyway (it will
cause you much pain!)

Dave
 
"Melanie Nasic" <quinn_the_esquimo@freenet.de> wrote in message
news:do9206$1m4$1@mamenchi.zrz.TU-Berlin.DE...
Hello community,

I am thinking about implementing a real-time compression scheme on an FPGA
working at about 500 MHz. Facing the fact that there is no "universal
compression" algorithm that can compress data regardless of its structure
and statistics I assume compressing grayscale image data. The image data
is delivered line-wise, meaning that one horizontal line is processed,
then the next one, and so on.
Because of the high data rate I cannot spend much time on DFT or DCT and
on data modelling. What I am looking for is a way to compress the pixel
data in spatial not spectral domain because of latency aspects, processing
complexity, etc.
Are you hoping for lossless, or is lossy OK?

Because of the sequential data transmission line by line a block matching
is also not possible in my opinion. The compression ratio is not so
important, factor 2:1 would be sufficient. What really matters is the real
time capability. The algorithm should be pipelineable and fast. The memory
requirements should not exceed 1 kb.
That's 1024 bits, or bytes?
Is it enough for one line?
You don't say what your resolution and frame rate are.

What "standard" compression schemes would you recommend? Are there
potentialities for a non-standard "own solution"?
If you don't have a line of storage available, that restricts you a lot.
I don't understand this though. If you're using a 500MHz FPGA, it's
presumably recent, and presumably has a decent amount of storage.

How about a 1d predictor, non-linear quantizer and entropy coder?
If you have more memory available, look at JPEG-LS.
It can do lossless, or variable mild degrees of loss.
 
Hi Derek,

I am sorry to disappoint you, but it's not a commercial or R&D science
project, and not even a student project. We did a student project with
FPGAs at university, but what I am trying to do here is more or less out
of personal interest.




<DerekSimmons@FrontierNet.net> schrieb im Newsbeitrag
news:1135088774.940472.174780@f14g2000cwb.googlegroups.com...
...

Derek
 
Hi Pete,

I want the compression to be lossless and not based on perceptual
irrelevancy reductions. By stating 1 kb I meant 1024 bits, and that's
just about half the line data. Your recommendation of a "1d predictor,
non-linear quantizer and entropy coder" sounds interesting. Could you
please elaborate on that? How is it done? Where can I find some example
code? How can it be achieved in hardware (VHDL sources?)

Thank you a lot.

Bye, Mel.



"Pete Fraser" <pfraser@covad.net> schrieb im Newsbeitrag
news:11qg5v3q9plph0a@news.supernews.com...
...
 
"Melanie Nasic" <quinn_the_esquimo@freenet.de> wrote in message
news:do95r4$4hh$1@mamenchi.zrz.TU-Berlin.DE...
Hi Pete,

I want the compression to be lossless and not based on perceptual
irrelevancy reductions. By stating 1 kb I meant 1024 bits, and that's
just about half the line data. Your recommendation of a "1d predictor,
non-linear quantizer and entropy coder" sounds interesting. Could you
please elaborate on that? How is it done? Where can I find some example
code? How can it be achieved in hardware (VHDL sources?)
I think you first need to play around with software and a few sample
images. The 1-d predictor means that you predict the next pixel in the
sequence by examining pixels to the left. A simple example would be to
encode the first pixel on the line, then use that as the prediction for
the next pixel. In that way you send only the difference between the
predicted value and what the pixel actually is. If you had enough memory
to store a line you could use a 2-d predictor, where you predict from
pixels to the left and pixels above.

Unfortunately, you can't use the non-linear quantizer as it's lossy.

I find Khalid Sayood's book "Introduction to Data Compression" quite
good. It comes with a link to a bunch of simple C code that has a variety
of predictors and entropy coders. You could try it on some sample images,
see how good a compression ratio you get, then go to hardware when you
have something acceptable.
 
If you can store a buffer of at least one extra scanline, you could try the
Paeth predictor + RLE. This will give reasonable prediction of the next
pixel's grayscale value, and if the prediction is OK, the result will often
contain a string of zeroes, and the RLE will do a good job.

If you can "afford it" (in other words, the FPGA is fast enough), you
could use arithmetic coding on the resulting prediction residuals, with a
simple order-0 model, instead of RLE.

Paeth + RLE will do OK on computer generated images, but not on natural
images. Paeth + AC will do OK on both.

Both will fit in 1kb of code for sure.

Nils
 
On 20/12/2005 the venerable Melanie Nasic etched in runes:

Hello community,

...

Regards, Melanie
Have a look at Graphics File Formats by Kay & Levine (ISBN
0-07-034025-0). It will give you some ideas.

--
John B
 
I am thinking about implementing a real-time compression scheme on an
FPGA working at about 500 MHz.
I'm currently working on something similar ...



Simple predictive schemes (like the 4th predictor from JPEG, or MED from
JPEG-LS) look promising ... they require storage of one line.

Entropy coding:
Huffman and multilevel arithmetic coding require a lot of resources
(that easily ends up above 1 kbit even for a table of probabilities);
binary arithmetic coding would be able to code less than 1 bit/cycle
(which ends up at >5 cycles/pixel - too slow in my case).

I'm currently investigating different schemes of Golomb-Rice codes
(static, adaptive, or even context-adaptive like in JPEG-LS) ... so far
they look promising ...

JPEG-LS: the concept looks nice and quite powerful for a pipelined
encoder (like for FPGAs) - unfortunately the decoder would require a
large feedback loop (and pipelining is almost impossible) ...
JPEG-LS is only symmetric for software implementations :-(



I'm still curious how you plan to achieve 500 MHz even on a Virtex-4
(I would say something like 200 MHz could be possible).


bye,
Michael
 
Melanie Nasic wrote:
Hi Pete,

...

Bye, Mel.


Hi Mel,

you can calculate the delta of two subsequent pixels and then Huffman
code the result. This should achieve almost 2:1 if there are not many
large brightness steps in the picture.

Regards
Thomas



"Pete Fraser" <pfraser@covad.net> schrieb im Newsbeitrag
news:11qg5v3q9plph0a@news.supernews.com...

...
 
