Applying a transformation model (data augmentation) to images in TensorFlow
I am new to Sequential models in TensorFlow with Python. I have a transformation Sequential model like the one below. It randomly applies some operations with random parameters to a given image input.
import tensorflow as tf
from tensorflow.keras import layers

data_transformation = tf.keras.Sequential(
    [
        layers.Lambda(lambda x: my_random_brightness(x, 1, 20)),
        layers.GaussianNoise(stddev=tf.random.uniform(shape=(), minval=0, maxval=1)),
        layers.experimental.preprocessing.RandomRotation(
            factor=0.01,
            fill_mode="reflect",
            interpolation="bilinear",
            seed=None,
            name=None,
            fill_value=0.0,
        ),
        layers.experimental.preprocessing.RandomZoom(
            height_factor=(0.1, 0.2),
            width_factor=(0.1, 0.2),
            fill_mode="reflect",
            interpolation="bilinear",
            seed=None,
            name=None,
            fill_value=0.0,
        ),
    ]
)
There is also a lambda function in this model, which is defined as below:
def my_random_brightness(
    image_to_be_transformed, brightness_factor_min, brightness_factor_max
):
    # build the brightness factor
    selected_brightness_factor = tf.random.uniform(
        (), minval=brightness_factor_min, maxval=brightness_factor_max
    )
    c0 = image_to_be_transformed[:, :, :, 0] + selected_brightness_factor
    c1 = image_to_be_transformed[:, :, :, 1] + selected_brightness_factor
    c2 = image_to_be_transformed[:, :, :, 2] + selected_brightness_factor
    image_to_be_transformed = tf.concat(
        [c0[..., tf.newaxis], image_to_be_transformed[:, :, :, 1:]], axis=-1
    )
    image_to_be_transformed = tf.concat(
        [
            image_to_be_transformed[:, :, :, 0][..., tf.newaxis],
            c1[..., tf.newaxis],
            image_to_be_transformed[:, :, :, 2][..., tf.newaxis],
        ],
        axis=-1,
    )
    image_to_be_transformed = tf.concat(
        [image_to_be_transformed[:, :, :, :2], c2[..., tf.newaxis]], axis=-1
    )
    return image_to_be_transformed
Now suppose I would like to apply such a model to a batch containing just one image so that it performs such random operations, and I would like to see and save the result. How can I do that? Is there any predict()- or flow()-like function to output such a result?
EDIT: I tried result = data_transformation(image)
and I got the following error:
tensorflow.python.framework.errors_impl.InvalidArgumentError: Index
out of range using input dim 3; input has only 3 dims
[Op:StridedSlice] name: sequential/lambda/strided_slice/
1 Answer
Apart from the correctness of the brightness processing layer (see above), it's coded to take a batch of images, not a single image. That's why it gives the reported error. You should add a batch axis when passing a single image in this case; then it should work.
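A minimal sketch of what that could look like (the variable names and the output filename augmented.png are placeholders, assuming image is a single (height, width, 3) float tensor):

import tensorflow as tf

# Add a leading batch axis so the model receives shape (1, height, width, 3).
batched_image = tf.expand_dims(image, axis=0)

# Call the model directly; training=True ensures the random layers
# (GaussianNoise, RandomRotation, RandomZoom) are actually applied.
result = data_transformation(batched_image, training=True)

# Drop the batch axis again and save the augmented image to disk.
augmented = tf.squeeze(result, axis=0)
tf.keras.preprocessing.image.save_img("augmented.png", augmented.numpy())

Note that model.predict() runs the layers with training=False, so GaussianNoise and the random preprocessing layers would be skipped; calling the model directly with training=True is what makes the augmentation visible.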
Also, for custom layer implementations, try to always adopt the subclassing approach.
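For example, a rough subclassed version of the brightness step could look like this (the class name MyRandomBrightness is made up here; the behaviour mirrors the Lambda in the question, adding one random offset to every channel):

import tensorflow as tf

class MyRandomBrightness(tf.keras.layers.Layer):
    # Hypothetical subclassed replacement for the Lambda-based brightness layer.
    def __init__(self, factor_min, factor_max, **kwargs):
        super().__init__(**kwargs)
        self.factor_min = factor_min
        self.factor_max = factor_max

    def call(self, inputs, training=None):
        # Only augment during training, like the built-in random layers.
        if not training:
            return inputs
        # One random offset, added to every channel of the whole batch.
        factor = tf.random.uniform((), self.factor_min, self.factor_max)
        return inputs + factor

    def get_config(self):
        config = super().get_config()
        config.update({"factor_min": self.factor_min, "factor_max": self.factor_max})
        return config

It could then replace the Lambda layer in the Sequential model, e.g. MyRandomBrightness(1, 20).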
As for the correctness of this implementation, I didn't check it rigorously. I would suggest using tf.image.random_brightness or tf.image.adjust_brightness from the official implementation. Or, if you're using TensorFlow 2.9, say hello to the new KerasCV, where we can find a RandomBrightness layer.
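A small sketch of both options (the max_delta/factor values below are placeholders, not tuned for your data):

import tensorflow as tf

# Option 1: wrap the official op in a Lambda layer (works on older TF versions);
# tf.image.random_brightness draws a delta in [-max_delta, max_delta).
brightness_via_lambda = tf.keras.layers.Lambda(
    lambda x: tf.image.random_brightness(x, max_delta=0.2)
)

# Option 2 (TF 2.9+): the built-in RandomBrightness layer; a similar layer
# is also available in the keras_cv package.
brightness_layer = tf.keras.layers.RandomBrightness(factor=0.2, value_range=(0, 255))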