Android Camera Development: Encoding Issues with Real-Time Camera Preview Frames (Part 2)
The earlier post "Android Development: Real-Time Processing of Camera Preview Frames (an overview of combining PreviewCallback, onPreviewFrame, and AsyncTask)" laid out the overall framework, but many readers questioned the processing inside onPreviewFrame(), arguing that the conversion below is redundant:
// Wrap the raw NV21 preview bytes, compress them to JPEG, then decode the JPEG into a Bitmap.
final YuvImage image = new YuvImage(mData, ImageFormat.NV21, w, h, null);
ByteArrayOutputStream os = new ByteArrayOutputStream(mData.length);
if (!image.compressToJpeg(new Rect(0, 0, w, h), 100, os)) {
    return null;
}
byte[] tmp = os.toByteArray();
Bitmap bmp = BitmapFactory.decodeByteArray(tmp, 0, tmp.length);
mData is a byte[], so the conversion chain is byte[] → YuvImage → ByteArrayOutputStream → byte[] → Bitmap. At first glance this really does look redundant. But take a look at Google's API documentation:
public abstract void onPreviewFrame (byte[] data, Camera camera)
Added in API level 1
Called as preview frames are displayed. This callback is invoked on the event thread open(int) was called from.
If using the YV12 format, refer to the equations in setPreviewFormat(int) for the arrangement of the pixel data in the preview callback buffers.
Parameters
data: the contents of the preview frame in the format defined by ImageFormat, which can be queried with getPreviewFormat(). If setPreviewFormat(int) is never called, the default will be the YCbCr_420_SP (NV21) format.
camera: the Camera service object.
Roughly: the preview-frame format in use can be queried with getPreviewFormat(). If setPreviewFormat(int) is never called, the default is the YCbCr_420_SP (NV21) format.
The documentation for setPreviewFormat(int) in turn says:
public void setPreviewFormat (int pixel_format)
Added in API level 1
Sets the image format for preview pictures.
If this is never called, the default format will be NV21, which uses the NV21 encoding format.
Use getSupportedPreviewFormats() to get a list of the available preview formats.
It is strongly recommended that either NV21 or YV12 is used, since they are supported by all camera devices.
For YV12, the image buffer that is received is not necessarily tightly packed, as there may be padding at the end of each row of pixel data, as described in YV12. For camera callback data, it can be assumed that the stride of the Y and UV data is the smallest possible that meets the alignment requirements. That is, if the preview size is width x height, then the following equations describe the buffer index for the beginning of row y for the Y plane and row c for the U and V planes:
yStride = (int) ceil(width / 16.0) * 16;
uvStride = (int) ceil( (yStride / 2) / 16.0) * 16;
ySize = yStride * height;
uvSize = uvStride * height / 2;
yRowIndex = yStride * y;
uRowIndex = ySize + uvSize + uvStride * c;
vRowIndex = ySize + uvStride * c;
size = ySize + uvSize * 2;
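As a sketch of what these equations mean in practice, the following helper (my own, not from the Android docs) computes the YV12 buffer offsets for a given preview size using exactly the formulas above:

// Sketch only: YV12 buffer offsets per the stride equations above.
// Helper and variable names are my own; the formulas come from the docs.
static int align16(int x) {
    return (int) Math.ceil(x / 16.0) * 16;
}

static void printYv12Offsets(int width, int height, int y, int c) {
    int yStride  = align16(width);
    int uvStride = align16(yStride / 2);
    int ySize  = yStride * height;
    int uvSize = uvStride * height / 2;

    int yRowIndex = yStride * y;                    // start of row y in the Y plane
    int vRowIndex = ySize + uvStride * c;           // start of row c in the V plane (V comes first in YV12)
    int uRowIndex = ySize + uvSize + uvStride * c;  // start of row c in the U plane
    int size = ySize + uvSize * 2;                  // total buffer size

    System.out.println("yRow=" + yRowIndex + " uRow=" + uRowIndex
            + " vRow=" + vRowIndex + " total=" + size);
}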
So NV21 and YV12 are strongly recommended, and the default is NV21, i.e. YUV420SP. Decoding mData directly with BitmapFactory, without any conversion, therefore cannot succeed, and that is what happens in practice: BitmapFactory.decodeByteArray simply returns null when given the raw NV21 bytes.
The documentation for getSupportedPreviewFormats() likewise notes that NV21 is always available:
getSupportedPreviewFormats()
Added in API level 5
Gets the supported preview formats. [NV21](http://developer.android.com/reference/android/graphics/ImageFormat.html#NV21) is always supported. [YV12](http://developer.android.com/reference/android/graphics/ImageFormat.html#YV12) is always supported since API level 12.
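A minimal sketch of how this is typically used (variable names such as mCamera are mine): query the supported formats and pin the preview format to NV21 before starting the preview, so onPreviewFrame data has a known layout.

// Sketch: explicitly request NV21 so onPreviewFrame data has a known layout.
Camera.Parameters params = mCamera.getParameters();
List<Integer> formats = params.getSupportedPreviewFormats();
if (formats.contains(ImageFormat.NV21)) {   // NV21 is always supported
    params.setPreviewFormat(ImageFormat.NV21);
}
int w = params.getPreviewSize().width;
int h = params.getPreviewSize().height;
mCamera.setParameters(params);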
If YuvImage's compress-then-decode path feels too slow, the only option is to write the conversion yourself. Three variants are commonly found online:
1. A skeleton of the capture/encode flow only
Reference: Android 实时视频采集—Camera 预览采集
// Callback that receives each video preview frame
mJpegPreviewCallback = new Camera.PreviewCallback() {
    @Override
    public void onPreviewFrame(byte[] data, Camera camera) {
        // The incoming data is YUV420SP (NV21) by default
        try {
            Log.i(TAG, "going into onPreviewFrame");
            // mYUV420sp = data; // keep a reference to the raw YUV420SP data
            YUVIMGLEN = data.length;
            // Copy the raw YUV420SP data into the send buffer under a semaphore
            mYuvBufferlock.acquire();
            System.arraycopy(data, 0, mYUV420SPSendBuffer, 0, data.length);
            // System.arraycopy(data, 0, mWrtieBuffer, 0, data.length);
            mYuvBufferlock.release();
            // Kick off the encoding thread (e.g. a JPEG-encoding thread).
            // Note: a Thread can only be started once; in real code this should be
            // started outside the callback or replaced with a persistent worker.
            mSendThread1.start();
        } catch (Exception e) {
            Log.v("System.out", e.toString());
        }
    }
};
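To cut down on the per-frame allocations that a plain setPreviewCallback incurs, the buffer-reusing variant of the Camera API can be used instead. A minimal sketch, assuming the same callback and an already-opened camera (the mCamera and previewBuffer names are mine):

// Sketch: reuse a preallocated buffer for preview frames instead of letting
// the framework allocate a new byte[] for every frame.
Camera.Size size = mCamera.getParameters().getPreviewSize();
int bufSize = size.width * size.height * 3 / 2;   // NV21 uses 1.5 bytes per pixel
byte[] previewBuffer = new byte[bufSize];
mCamera.addCallbackBuffer(previewBuffer);
mCamera.setPreviewCallbackWithBuffer(mJpegPreviewCallback);
// Inside onPreviewFrame(), hand the buffer back once it has been copied:
// camera.addCallbackBuffer(data);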
2. Converting YUV420SP to RGB
Reference: android 视频采集
private void updateIM() {
    try {
        // Decode the YUV frame into RGB
        decodeYUV420SP(byteArray, yuv420sp, width, height);
        DataBuffer dataBuffer = new DataBufferByte(byteArray, numBands);
        WritableRaster wr = Raster.createWritableRaster(sampleModel,
                dataBuffer, new Point(0, 0));
        im = new BufferedImage(cm, wr, false, null);
    } catch (Exception ex) {
        ex.printStackTrace();
    }
}

private static void decodeYUV420SP(byte[] rgbBuf, byte[] yuv420sp,
        int width, int height) {
    final int frameSize = width * height;
    if (rgbBuf == null)
        throw new NullPointerException("buffer 'rgbBuf' is null");
    if (rgbBuf.length < frameSize * 3)
        throw new IllegalArgumentException("buffer 'rgbBuf' size "
                + rgbBuf.length + " < minimum " + frameSize * 3);
    if (yuv420sp == null)
        throw new NullPointerException("buffer 'yuv420sp' is null");
    if (yuv420sp.length < frameSize * 3 / 2)
        throw new IllegalArgumentException("buffer 'yuv420sp' size "
                + yuv420sp.length + " < minimum " + frameSize * 3 / 2);
    int i = 0, y = 0;
    int uvp = 0, u = 0, v = 0;
    int y1192 = 0, r = 0, g = 0, b = 0;
    for (int j = 0, yp = 0; j < height; j++) {
        uvp = frameSize + (j >> 1) * width;
        u = 0;
        v = 0;
        for (i = 0; i < width; i++, yp++) {
            y = (0xff & ((int) yuv420sp[yp])) - 16;
            if (y < 0)
                y = 0;
            if ((i & 1) == 0) {
                // NV21 interleaves chroma as V,U after the Y plane
                v = (0xff & yuv420sp[uvp++]) - 128;
                u = (0xff & yuv420sp[uvp++]) - 128;
            }
            // Fixed-point YUV -> RGB conversion
            y1192 = 1192 * y;
            r = (y1192 + 1634 * v);
            g = (y1192 - 833 * v - 400 * u);
            b = (y1192 + 2066 * u);
            if (r < 0) r = 0; else if (r > 262143) r = 262143;
            if (g < 0) g = 0; else if (g > 262143) g = 262143;
            if (b < 0) b = 0; else if (b > 262143) b = 262143;
            rgbBuf[yp * 3] = (byte) (r >> 10);
            rgbBuf[yp * 3 + 1] = (byte) (g >> 10);
            rgbBuf[yp * 3 + 2] = (byte) (b >> 10);
        }
    }
}

public static void main(String[] args) {
    // From the original desktop-Java (AWT) example; FlushMe is a Frame subclass defined there.
    Frame f = new FlushMe();
}
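Note that this snippet targets desktop Java (java.awt classes such as BufferedImage and Frame). On Android the same fixed-point math is usually written to fill an int[] of ARGB pixels that can be handed to Bitmap.createBitmap. A sketch along those lines (the function name is mine):

// Sketch: same conversion, but packing ARGB ints for android.graphics.Bitmap.
private static void decodeYUV420SPtoARGB(int[] argb, byte[] yuv420sp, int width, int height) {
    final int frameSize = width * height;
    for (int j = 0, yp = 0; j < height; j++) {
        int uvp = frameSize + (j >> 1) * width, u = 0, v = 0;
        for (int i = 0; i < width; i++, yp++) {
            int y = (0xff & yuv420sp[yp]) - 16;
            if (y < 0) y = 0;
            if ((i & 1) == 0) {
                v = (0xff & yuv420sp[uvp++]) - 128;
                u = (0xff & yuv420sp[uvp++]) - 128;
            }
            int y1192 = 1192 * y;
            int r = y1192 + 1634 * v;
            int g = y1192 - 833 * v - 400 * u;
            int b = y1192 + 2066 * u;
            if (r < 0) r = 0; else if (r > 262143) r = 262143;
            if (g < 0) g = 0; else if (g > 262143) g = 262143;
            if (b < 0) b = 0; else if (b > 262143) b = 262143;
            // Pack 0xAARRGGBB: take the top 8 bits of each 18-bit channel value
            argb[yp] = 0xff000000 | ((r << 6) & 0xff0000) | ((g >> 2) & 0xff00) | ((b >> 10) & 0xff);
        }
    }
}

// Usage: Bitmap bmp = Bitmap.createBitmap(argb, width, height, Bitmap.Config.ARGB_8888);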
3. Converting YUV420SP to planar YUV420
Reference: Android 如何实现边采集边上传
private byte[] changeYUV420SP2P(byte[] data, int length) {
    // Width/height are hard-coded for a QCIF (176x144) preview here;
    // in general they should come from Camera.Parameters.getPreviewSize().
    int width = 176;
    int height = 144;
    byte[] str = new byte[length];
    // The Y plane is copied unchanged
    System.arraycopy(data, 0, str, 0, width * height);
    int strIndex = width * height;
    // NV21 stores chroma interleaved as V,U; pick out the U samples first...
    for (int i = width * height + 1; i < length; i += 2) {
        str[strIndex++] = data[i];
    }
    // ...then the V samples, giving a planar Y + U + V layout
    for (int i = width * height; i < length; i += 2) {
        str[strIndex++] = data[i];
    }
    return str;
}
As for extracting just the Y (luma) component directly from YUV420SP for later detection work, that still needs some investigation; pointers from anyone who knows are welcome.
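For what it's worth, a minimal sketch of the obvious approach (my own, not verified against any detection pipeline): in NV21 the Y plane is simply the first width*height bytes of the frame, so it can be copied out directly and, if needed, expanded into a grayscale Bitmap.

// Sketch: the Y (luma) plane occupies the first width*height bytes of an NV21 frame.
byte[] yPlane = new byte[width * height];
System.arraycopy(data, 0, yPlane, 0, width * height);

// Optional: expand the luma values into a grayscale ARGB Bitmap for inspection.
int[] gray = new int[width * height];
for (int i = 0; i < gray.length; i++) {
    int lum = yPlane[i] & 0xff;
    gray[i] = 0xff000000 | (lum << 16) | (lum << 8) | lum;
}
Bitmap grayBmp = Bitmap.createBitmap(gray, width, height, Bitmap.Config.ARGB_8888);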