What Type of Decimation Algorithm Does LabVIEW Use to Display Data on a Graph?
Primary Software Version: 1.0
Primary Software Fixed Version: N/A
Secondary Software: LabVIEW Development Systems>>LabVIEW Full Development System
The average LabVIEW graph is on the order of 300 to 1000 pixels wide, but LabVIEW can accurately display a data set containing over a million points. What type of decimation algorithm does LabVIEW use to display more points than there are pixels?
LabVIEW uses a max-min decimation algorithm to display data on a graph. In max-min decimation, the maximum and minimum data points within each decimation interval are kept as the representative points for that interval. In contrast, simple decimation keeps only the first point of each decimation interval. Simple decimation can introduce aliasing artifacts, so it should be used only when speed is critical and accuracy is not a concern.
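The idea can be sketched in a few lines of code. The following Python function is an illustrative sketch of max-min decimation, not LabVIEW's actual (graphical) implementation; the function name and interval-splitting scheme are assumptions for the example. Each interval contributes two points, its minimum and its maximum, kept in their original temporal order:

```python
def max_min_decimate(data, num_intervals):
    """Illustrative max-min decimation: reduce data to at most
    2 * num_intervals points by keeping the minimum and maximum
    of each interval, preserving their order of occurrence.
    (A sketch of the technique, not LabVIEW's implementation.)"""
    n = len(data)
    out = []
    for i in range(num_intervals):
        # Split the data into num_intervals roughly equal chunks
        start = i * n // num_intervals
        end = (i + 1) * n // num_intervals
        chunk = data[start:end]
        lo, hi = min(chunk), max(chunk)
        # Emit min and max in the order they appear in the chunk,
        # so plotted peaks point in the correct direction
        if chunk.index(lo) <= chunk.index(hi):
            out.extend([lo, hi])
        else:
            out.extend([hi, lo])
    return out

# Simple decimation, for comparison: keep only the first point
# of each interval (prone to aliasing)
def simple_decimate(data, num_intervals):
    n = len(data)
    return [data[i * n // num_intervals] for i in range(num_intervals)]
```

For example, decimating `[0, 5, -5, 1, 2, -3, 9, 0]` into two intervals with `max_min_decimate` yields `[5, -5, -3, 9]`, preserving both extremes of each half, while `simple_decimate` yields `[0, 2]` and misses every peak.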
Figure 1 shows an example of max-min decimation. Processing the data set shown in the left-hand image with max-min decimation produces the graph displayed on the right:
Figure 1. Max-Min Decimation
The max-min algorithm ensures that you always see the peaks of the data, giving you the solid band that the high-frequency sine wave in Figure 1 should produce. Because far fewer points are plotted to the graph, drawing is also much faster.
NI Developer Zone: Managing Large Data Sets in LabVIEW
Report Date: 07/29/2009
Last Updated: 08/24/2009
Document ID: 4ZSFIO3S