Rendering a NumPy Array in FastAPI

Published on 01-16 16:31


I have found How to return a numpy array as an image using FastAPI?; however, I am still struggling to show the image, which appears as just a white square.

I read an array into io.BytesIO like so:

import io
import numpy as np

def iterarray(array):
    output = io.BytesIO()
    np.savez(output, array)
    yield output.getvalue()  # BytesIO has getvalue(), not get_value()

In my endpoint, I return StreamingResponse(iterarray(array), media_type='application/octet-stream').

When I leave media_type unset so that it is inferred, a zip file is downloaded.

How do I get the array to be displayed as an image?
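As an aside on why a zip file is inferred: np.savez writes an NPZ archive, which is ZIP format on the wire, so the yielded bytes are never image data to begin with. A minimal standalone sketch illustrating this:

```python
import io

import numpy as np

arr = np.zeros((2, 2), dtype=np.uint8)

buf = io.BytesIO()
np.savez(buf, arr)        # savez produces an .npz archive, i.e. ZIP format
data = buf.getvalue()

print(data[:2])           # b'PK' -- the ZIP magic bytes the browser detects

# the payload can only be read back with np.load, not rendered as an image
loaded = np.load(io.BytesIO(data))
print(loaded['arr_0'].shape)  # (2, 2)
```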


Comments (1)

ヤ经典坏疍 2025-01-23 16:31:27


Option 1 - Return image as bytes

The below examples show how to convert an image loaded from disk, or an in-memory image (in the form of a numpy array), into bytes (using either the PIL or OpenCV library) and return them using a custom Response directly. For the purposes of this demo, the below code is used to create the in-memory sample image (numpy array), which is based on this answer.

# Function to create a sample RGB image
def create_img():
    w, h = 512, 512
    arr = np.zeros((h, w, 3), dtype=np.uint8)
    arr[0:256, 0:256] = [255, 0, 0] # red patch in upper left
    return arr

Using PIL

Server side:

You can load an image from disk using Image.open, or use Image.fromarray to load an in-memory image (Note: for demo purposes, when loading the image from disk, the example below performs that operation inside the route. However, if the same image is going to be served multiple times, one could load it only once at startup and store it on the app instance, as described in this answer and this answer). Next, write the image to a buffered stream, i.e., BytesIO, and use the getvalue() method to get the entire contents of the buffer. Even though the buffered stream is garbage collected when it goes out of scope, it is generally better to call close() or use the with statement, as shown here and in the example below.

from fastapi import Response
from PIL import Image
import numpy as np
import io

@app.get('/image', response_class=Response)
def get_image():
    # loading image from disk
    # im = Image.open('test.png')
    
    # using an in-memory image
    arr = create_img()
    im = Image.fromarray(arr)
    
    # save image to an in-memory bytes buffer
    with io.BytesIO() as buf:
        im.save(buf, format='PNG')
        im_bytes = buf.getvalue()
        
    headers = {'Content-Disposition': 'inline; filename="test.png"'}
    return Response(im_bytes, headers=headers, media_type='image/png')

Client side:

The below demonstrates how to send a request to the above endpoint using the Python requests module, and either write the received bytes to a file or convert them back into a PIL Image, as described here.

import requests
from PIL import Image

url = 'http://127.0.0.1:8000/image'
r = requests.get(url=url)

# write raw bytes to file
with open('test.png', 'wb') as f:
    f.write(r.content)

# or, convert back to PIL Image
# im = Image.open(io.BytesIO(r.content))
# im.save('test.png') 

Using OpenCV

Server side:

You can load an image from disk using the cv2.imread() function, or use an in-memory image, which, if it is in RGB order (as in the example below), needs to be converted, as OpenCV uses BGR as its default colour order for images. Next, use the cv2.imencode() function, which compresses the image data (based on the file extension you pass, which defines the output format, e.g., .png, .jpg, etc.) and stores it in an in-memory buffer that is used to transfer the data over the network.

import cv2

@app.get('/image', response_class=Response)
def get_image():
    # loading image from disk
    # arr = cv2.imread('test.png', cv2.IMREAD_UNCHANGED)
    
    # using an in-memory image
    arr = create_img()
    arr = cv2.cvtColor(arr, cv2.COLOR_RGB2BGR)
    # arr = cv2.cvtColor(arr, cv2.COLOR_RGBA2BGRA) # if dealing with 4-channel RGBA (transparent) image

    success, im = cv2.imencode('.png', arr)
    headers = {'Content-Disposition': 'inline; filename="test.png"'}
    return Response(im.tobytes(), headers=headers, media_type='image/png')

Client side:

On the client side, you can write the raw bytes to a file, or use the numpy.frombuffer() and cv2.imdecode() functions to decompress the buffer into an image format (similar to this). cv2.imdecode() does not require a file extension, as the correct codec is deduced from the first bytes of the compressed image in the buffer.

url = 'http://127.0.0.1:8000/image'
r = requests.get(url=url) 

# write raw bytes to file
with open('test.png', 'wb') as f:
    f.write(r.content)

# or, convert back to image format    
# arr = np.frombuffer(r.content, np.uint8)
# img_np = cv2.imdecode(arr, cv2.IMREAD_UNCHANGED)
# cv2.imwrite('test.png', img_np)

Useful Information

Since you noted that you would like the image displayed similar to a FileResponse, using a custom Response to return the bytes should be the way to do this, instead of using StreamingResponse (as shown in your question). To indicate that the image should be viewed in the browser, the HTTP response should include the following Content-Disposition header, as described here and as shown in the above examples (the quotes around the filename are required, if the filename contains special characters):

headers = {'Content-Disposition': 'inline; filename="test.png"'}

Whereas, to have the image downloaded rather than viewed (use attachment instead of inline):

headers = {'Content-Disposition': 'attachment; filename="test.png"'}

If you would like to display (or download) the image using a JavaScript interface, such as Fetch API or Axios, have a look at the answers here and here.

As for StreamingResponse, if the entire numpy array/image is already loaded into memory, StreamingResponse would not be necessary at all (and it certainly should not be the preferred choice for returning data that is already loaded in memory to the client). StreamingResponse streams by iterating over the chunks provided by your iter() function. As shown in the implementation of the StreamingResponse class, if the iterator/generator you pass is not an AsyncIterable, a thread from the external threadpool (see this answer for more details on that threadpool) will be spawned to run the synchronous iterator, using Starlette's iterate_in_threadpool() function, in order to avoid blocking the event loop. It should also be noted that the Content-Length response header is not set when using StreamingResponse (which makes sense, since StreamingResponse is supposed to be used when you don't know the size of the response beforehand), unlike the other Response classes of FastAPI/Starlette, which set that header for you, so that the browser knows where the data ends. It should be kept that way: if the Content-Length header were included (with a value matching the overall response body size in bytes), then to the server a StreamingResponse would look the same as a Response, as the server would not use transfer-encoding: chunked in that case (even though at the application level the two would still differ). Take a look at Uvicorn's documentation on response headers and MDN's documentation on Transfer-Encoding: chunked for further details.
Even in cases where you know the body size beforehand but still need StreamingResponse (as it lets you load and transfer the data with a chunk size of your choice, unlike FileResponse; see later on for more details), you should make sure not to set the Content-Length header yourself, e.g., StreamingResponse(iterfile(), headers={'Content-Length': str(content_length)}), as this would result in the server not using transfer-encoding: chunked (regardless of the application delivering the data to the web server in chunks, as shown in the relevant implementation).

As described in this answer:

Chunked transfer encoding makes sense when you don't know the size of
your output ahead of time, and you don't want to wait to collect it
all to find out before you start sending it to the client. That can
apply to stuff like serving the results of slow database queries, but
it doesn't generally apply to serving images.

Even if you would like to stream an image file that is saved on the disk, file-like objects, such as those created by open(), are normal iterators; thus, you can return them directly in a StreamingResponse, as described in the documentation and as shown below (if you find yield from f rather slow when using StreamingResponse, have a look at this answer on how to read the file in chunks with a chunk size of your choice, which should be set based on your needs and your server's resources). It should be noted that using FileResponse would also read the file contents into memory in chunks, instead of the entire contents at once. However, as can be seen in the implementation of the FileResponse class, the chunk size used is pre-defined and set to 64KB. Thus, based on one's requirements, one should decide which of the two Response classes to use.

@app.get('/image')
def get_image():
    def iterfile():  
        with open('test.png', mode='rb') as f:  
            yield from f  
            
    return StreamingResponse(iterfile(), media_type='image/png')

Or, if the image was already loaded into memory and then saved into a BytesIO buffered stream, since BytesIO is a file-like object (like all the concrete classes of the io module), you could return it directly in a StreamingResponse (or, preferably, simply call buf.getvalue() to get the entire image bytes and return them using a custom Response directly, as shown earlier). If returning the buffered stream, as shown in the example below, remember to call buf.seek(0) to rewind the cursor to the start of the buffer, as well as close() inside a background task, in order to discard the buffer once the response has been sent to the client.

from fastapi import BackgroundTasks

@app.get('/image')
def get_image(background_tasks: BackgroundTasks):
    # supposedly, the buffer already existed in memory
    arr = create_img()
    im = Image.fromarray(arr)
    buf = BytesIO()
    im.save(buf, format='PNG')

    # rewind the cursor to the start of the buffer
    buf.seek(0)
    # discard the buffer, after the response is returned
    background_tasks.add_task(buf.close)
    return StreamingResponse(buf, media_type='image/png')

Thus, in your case, the most suitable approach would be to return a custom Response directly, including your custom content and media_type, as well as setting the Content-Disposition header, as described earlier, so that the image is viewed in the browser.

Option 2 - Return image as JSON-encoded numpy array

The below should not be used for displaying the image in the browser; it is rather added here for the sake of completeness, showing how to convert an image into a numpy array (preferably using the asarray() function), then return the data in JSON format, and finally convert the data back into an image on the client side, as described in this and this answer. For faster alternatives to the standard Python json library, see this answer.
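The JSON round-trip itself can be sanity-checked locally with the standard json library; a minimal sketch (the small array here is just a placeholder):

```python
import json

import numpy as np

arr = np.arange(6, dtype=np.uint8).reshape(2, 3)

payload = json.dumps(arr.tolist())                           # what the endpoint returns
restored = np.asarray(json.loads(payload)).astype(np.uint8)  # what the client rebuilds

print(np.array_equal(arr, restored))  # True
```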

Using PIL

Server side:

from PIL import Image
import numpy as np
import json

@app.get('/image')
def get_image():
    im = Image.open('test.png')
    # im = Image.open('test.png').convert('RGBA') # if dealing with 4-channel RGBA (transparent) image 
    arr = np.asarray(im)
    return json.dumps(arr.tolist())

Client side:

import requests
from PIL import Image
import numpy as np
import json

url = 'http://127.0.0.1:8000/image'
r = requests.get(url=url) 
arr = np.asarray(json.loads(r.json())).astype(np.uint8)
im = Image.fromarray(arr)
im.save('test_received.png')

Using OpenCV

Server side:

import cv2
import json

@app.get('/image')
def get_image():
    arr = cv2.imread('test.png', cv2.IMREAD_UNCHANGED)
    return json.dumps(arr.tolist())

Client side:

import requests
import numpy as np
import cv2
import json

url = 'http://127.0.0.1:8000/image'
r = requests.get(url=url) 
arr = np.asarray(json.loads(r.json())).astype(np.uint8)
cv2.imwrite('test_received.png', arr)

Author: 柠檬色的秋千
