Hi, I have a PixeLINK camera that has a built-in API for .NET programming.
The function below captures an image from the camera using C#.
It generates a byte[] dstBuf sized for the specific picture format. When I set the format to 200x200 for a BMP, I expect byte[40000]; however, it generates a size of 41078.
// Save the data to a binary file
// (FileMode.Create truncates any existing file; OpenOrCreate can leave stale bytes behind)
using (FileStream fStream = new FileStream(filename, FileMode.Create))
using (BinaryWriter bw = new BinaryWriter(fStream))
{
    bw.Write(dstBuf);
}
This code writes the byte array to an image file.
When I convert that image file to a 2-D array, a completely black image gives me all 0s, which is correct because 0 is black on a 0-255 scale (the picture is 8-bit monochrome black and white).
What I wanted to do was go directly from capturing the image into the dstBuf byte[] to a 2-D array. However, when I look inside this array it contains values from 0-255, not all 0s.
So my questions are: does a BMP have a header section saved in the byte array that prevents it from being exactly 40000 bytes (200x200)? And why does this byte[] contain non-zero values when the image is all black? Is there some type of conversion from a byte to an integer value that represents the 0-255 color scale (shades of black and white, since it's 8-bit mono)?
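For reference, this is very likely what is happening: a standard 8-bit BMP carries a 14-byte file header, a 40-byte BITMAPINFOHEADER, and a 256-entry color palette of 4 bytes each in front of the pixel data, and a grayscale palette runs through the whole 0-255 ramp, which would explain both the 1078 extra bytes and the non-zero values seen in the buffer. A minimal sketch of that arithmetic (assuming the standard Windows BMP layout; the exact pixel-data offset can always be read from bytes 10-13 of the file header rather than assumed):

```csharp
using System;
using System.Diagnostics;

class BmpLayout
{
    static void Main()
    {
        const int width = 200, height = 200;

        int fileHeader = 14;        // BITMAPFILEHEADER
        int infoHeader = 40;        // BITMAPINFOHEADER
        int palette    = 256 * 4;   // 256 gray entries, 4 bytes each (B, G, R, reserved)

        // Each pixel row is padded up to a multiple of 4 bytes; 200 already is.
        int stride    = (width + 3) / 4 * 4;
        int pixelData = stride * height;

        int total = fileHeader + infoHeader + palette + pixelData;
        Console.WriteLine(total);   // 14 + 40 + 1024 + 40000 = 41078
    }
}
```

So the 41078 bytes are the 40000 pixels plus 1078 bytes of header and palette, and the non-zero bytes in an all-black image are those header and palette bytes, not pixel values.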
This is the API function used to capture an image:
public byte[] GetSnapshot(ImageFormat imageFormat, string filename)
{
    // In this example imageFormat can be JPEG, BMP, RawRGB24, and TIFF.
    // JPEG gives me a size of 19xxx
    // BMP gives a size of 41078
    // did not try TIFF
    // RawRGB24 gives 120000, which is 40000 x 3
    // unfortunately there is no format for raw 8-bit
    int rawImageSize = DetermineRawImageSize();
    byte[] buf = new byte[rawImageSize];

    Api.SetStreamState(m_hCamera, StreamState.Start);
    FrameDescriptor frameDesc = new FrameDescriptor();
    ReturnCode rc = Api.GetNextFrame(m_hCamera, buf.Length, buf, ref frameDesc);
    Api.SetStreamState(m_hCamera, StreamState.Stop);

    // How big a buffer do we need for the converted image?
    int destBufferSize = 0;
    rc = Api.FormatImage(buf, ref frameDesc, imageFormat, null, ref destBufferSize);
    byte[] dstBuf = new byte[destBufferSize];
    rc = Api.FormatImage(buf, ref frameDesc, imageFormat, dstBuf, ref destBufferSize);

    // Save the data to a binary file
    // (FileMode.Create truncates any existing file; OpenOrCreate can leave stale bytes behind)
    using (FileStream fStream = new FileStream(filename, FileMode.Create))
    using (BinaryWriter bw = new BinaryWriter(fStream))
    {
        bw.Write(dstBuf);
    }

    return dstBuf;
}
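To go directly from the captured buffer to a 2-D array without writing a file, one option is to decode the BMP bytes in memory. The helper below is a sketch, not part of the PixeLINK API: it assumes an 8-bit BMP like the one produced above, reads the pixel-data offset from bytes 10-13 of the file header, and accounts for BMP rows being stored bottom-up and padded to 4-byte multiples:

```csharp
using System;

class BmpToArray
{
    // Hypothetical helper: extract an 8-bit BMP's pixels into a [row, col] array.
    public static byte[,] ToPixelArray(byte[] bmp, int width, int height)
    {
        // The pixel-data offset lives at bytes 10-13 of the BITMAPFILEHEADER.
        int offset = BitConverter.ToInt32(bmp, 10);
        int stride = (width + 3) / 4 * 4;   // rows are padded to 4-byte multiples

        byte[,] pixels = new byte[height, width];
        for (int row = 0; row < height; row++)
        {
            // BMP stores rows bottom-up, so flip the row index.
            int src = offset + (height - 1 - row) * stride;
            for (int col = 0; col < width; col++)
                pixels[row, col] = bmp[src + col];
        }
        return pixels;
    }
}
```

With this, an all-black frame really does come out as all 0s, because the header and palette bytes are skipped. Another route, since there is no raw 8-bit format, would be to request RawRGB24 and keep every third byte; that assumes the camera replicates the mono value into R, G, and B, which is worth verifying against the PixeLINK documentation.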