How are things represented visually?

I'm a beginner and I was wondering how pictures, video, windows, buttons, etc. are represented visually on the screen. I'm not asking whether something was made with, for example, GTK or wxWidgets; my question is what the fundamental idea is behind making the pixels come up the way they do, and what exactly a GUI library uses to put them on the screen.

Comments (4)

稀香 2024-08-22 01:55:51

At its most basic, the operating system exposes a set of base drawing APIs (GDI, GDI+, DirectX, OpenGL), which call into the display driver, which in turn updates "video memory". Back in the DOS days you could update video memory manually, but that has become increasingly difficult with the large number of hardware systems out there, so instead you instruct the video driver to do it for you.
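
For instance, here is a minimal sketch of that first step on Windows using GDI, one of the APIs named above (error handling omitted). The program never touches video memory itself; it just asks the OS, which calls the driver:

```c
/* Minimal GDI sketch (Windows): ask the OS to fill a rectangle;
 * GDI forwards the work to the display driver, which updates
 * video memory on our behalf. */
#include <windows.h>

int main(void)
{
    HDC hdc = GetDC(NULL);                 /* device context for the whole screen */
    HBRUSH brush = CreateSolidBrush(RGB(0, 128, 255));
    RECT r = { 100, 100, 300, 200 };       /* left, top, right, bottom */

    FillRect(hdc, &r, brush);              /* the driver does the actual writing */

    DeleteObject(brush);
    ReleaseDC(NULL, hdc);
    return 0;
}
```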

Now, once it's in video memory, the information gets sent sequentially to your monitor, scan line by scan line (read: row by row). If you update video memory while it's being sent to the monitor, you get what's called tearing (the thing the v-sync setting in games avoids).

To avoid tearing, and to avoid locking video memory during this transfer, a technique called double buffering is usually used: there are two "video memory" buffers on the graphics card, and once you finish drawing into one and the monitor begins scanning it out, the card displays that first buffer while letting you write the next frame into the second, thus parallelizing the process.
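
A toy sketch of that double-buffering idea in C (the buffer sizes and the vblank wait are illustrative; real drivers expose this as a page flip or a swap-buffers call):

```c
/* Double-buffering sketch: draw into the back buffer while the
 * front buffer is being scanned out to the monitor, then swap. */
#include <stdint.h>

#define WIDTH  640
#define HEIGHT 480

static uint32_t buffer_a[WIDTH * HEIGHT];
static uint32_t buffer_b[WIDTH * HEIGHT];

static uint32_t *front = buffer_a;  /* being scanned out right now */
static uint32_t *back  = buffer_b;  /* safe to draw into */

void render_frame(uint32_t color)
{
    /* Draw the next frame into the back buffer only. */
    for (int i = 0; i < WIDTH * HEIGHT; i++)
        back[i] = color;

    /* Swap at the vertical blank (hypothetical wait) so the monitor
     * never scans out a half-drawn buffer -- no tearing. */
    /* wait_for_vblank(); */
    uint32_t *tmp = front;
    front = back;
    back = tmp;
}
```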

Note: this covers the 2D part, since that's what you seem to be asking about. The 3D part is similar but adds a layer: once you pass vertices to the display driver, it projects them into "screen space" and rasterizes them, scan line by scan line, into video memory, which is later sent to your monitor.
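
The projection step mentioned in the note is, at its core, a perspective divide. A toy sketch, assuming camera-space coordinates with the camera looking down +z (the focal parameter is a made-up scale factor controlling field of view):

```c
/* Toy perspective projection: map a 3D point in camera space to
 * 2D pixel coordinates. Points farther away (larger z) land
 * closer to the center of the screen. */
typedef struct { float x, y, z; } Vec3;

void project(Vec3 v, float focal, int width, int height,
             float *screen_x, float *screen_y)
{
    *screen_x = (v.x / v.z) * focal + width  / 2.0f;
    *screen_y = (v.y / v.z) * focal + height / 2.0f;
}
```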

梨涡少年 2024-08-22 01:55:51

Read Wikipedia's entry on raster graphics; it covers the basics.

我恋#小黄人 2024-08-22 01:55:51

Short answer to the last question: GUI libraries call operating system functions to draw UI elements, which in turn call the appropriate driver for the display hardware. The driver sends commands to the hardware by writing to its externally accessible ports, which are mapped to a special area of the computer's memory or I/O space (see Device Driver on Wikipedia).
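
A sketch of what such a port write can look like at the driver level; the register address and command value here are entirely hypothetical placeholders (real ones come from the bus or PCI configuration):

```c
/* Memory-mapped I/O sketch: the device's control register is mapped
 * into the address space, so storing to that address sends a command
 * to the hardware. Address and value are hypothetical. */
#include <stdint.h>

#define DEVICE_CTRL_REG ((volatile uint32_t *)0xFEDC0000u)
#define CMD_REDRAW      0x00000001u

void send_redraw_command(void)
{
    /* 'volatile' prevents the compiler from optimizing away or
     * reordering the store; the hardware must see every write. */
    *DEVICE_CTRL_REG = CMD_REDRAW;
}
```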

凡尘雨 2024-08-22 01:55:51

Overview

Usually, a grouping of pixel data in raw form is called a framebuffer. This is a one-dimensional array of values of the same size; the size of each value depends on the color space and color depth used. For example, a 32-bit RGBA framebuffer could be defined like this in C: unsigned int fb[width*height];, given that sizeof(unsigned int) is 4.
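
Because that array is one-dimensional, a 2D coordinate has to be flattened into an index before a pixel can be touched. A minimal sketch under the same assumptions:

```c
/* Plot one pixel in a 32-bit RGBA framebuffer: flatten (x, y)
 * into a 1D index, rows stored one after another. */
#include <stdint.h>

void put_pixel(uint32_t *fb, int width, int x, int y, uint32_t rgba)
{
    fb[y * width + x] = rgba;   /* row y, column x */
}
```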

Hardware usually uses the RGB color space, both for hardware graphics (e.g. OpenGL and DirectX) and for direct framebuffer access (e.g. /dev/fb on Linux and GDI+ on Windows). Hardware decoding of movies usually depends on the YCbCr color space instead. Some image file formats also store their data in color spaces other than RGB, but that data is converted to RGB before being passed to the API they use.
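
As an illustration of the direct-access route, a sketch that maps /dev/fb0 on Linux and fills the screen with gray. Error handling is omitted, and it assumes 32 bits per pixel with a row stride of exactly xres * 4; real code should verify both via FBIOGET_VSCREENINFO and FBIOGET_FSCREENINFO:

```c
/* Direct framebuffer sketch (Linux): map /dev/fb0 into memory and
 * write pixels straight into video memory. */
#include <fcntl.h>
#include <linux/fb.h>
#include <stdint.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
    int fd = open("/dev/fb0", O_RDWR);
    struct fb_var_screeninfo vinfo;
    ioctl(fd, FBIOGET_VSCREENINFO, &vinfo);   /* resolution and depth */

    size_t size = (size_t)vinfo.yres * vinfo.xres * vinfo.bits_per_pixel / 8;
    uint32_t *fb = mmap(NULL, size, PROT_READ | PROT_WRITE,
                        MAP_SHARED, fd, 0);

    /* Paint the whole screen gray (assumes 32 bpp, packed rows). */
    for (size_t i = 0; i < (size_t)vinfo.xres * vinfo.yres; i++)
        fb[i] = 0x00808080;

    munmap(fb, size);
    close(fd);
    return 0;
}
```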

Pixel color

Assuming the RGB color space, an individual pixel's color can be represented in these ways (the list is not complete):

Separate color channels (true color)

True color uses one component for each of the three or four color channels, where each channel value represents the intensity of that color. If there is an alpha channel it works the same way, but the meaning of alpha is context-dependent. Usually alpha represents opacity, where the lowest value is 100% translucent and the highest value is 100% opaque. For example, (0, 255, 0, 128) would represent 50% translucent green in RGBA true color with 8 bits of color depth per channel.
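
Packing those four 8-bit channels into one 32-bit pixel is plain bit shifting. A sketch assuming the bytes are ordered R, G, B, A from the most significant byte down:

```c
/* Pack four 8-bit channels into one 32-bit RGBA value.
 * (0, 255, 0, 128) -> 50% translucent green -> 0x00FF0080. */
#include <stdint.h>

uint32_t pack_rgba(uint8_t r, uint8_t g, uint8_t b, uint8_t a)
{
    return ((uint32_t)r << 24) | ((uint32_t)g << 16)
         | ((uint32_t)b << 8)  |  (uint32_t)a;
}
```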

Palette

With palettes, the individual pixel value is an index into an array of colors, usually true color (though that does not imply 8 bits per channel). The range of the index determines the number of colors that can be represented. 8-bit palette indices were once common, especially on commodity VGA hardware in the early 1990s; such hardware could display a subset of 256 colors at a time, out of a pool of 2^24 colors.
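
A sketch of the lookup step: each 8-bit pixel value selects one true-color entry from a 256-entry table (the palette contents themselves are supplied elsewhere):

```c
/* Palette expansion sketch: 8-bit indices into a 256-entry
 * true-color palette, expanded into a 32-bit framebuffer. */
#include <stdint.h>

void expand_palette(const uint8_t *indexed, uint32_t *fb,
                    const uint32_t palette[256], int pixel_count)
{
    for (int i = 0; i < pixel_count; i++)
        fb[i] = palette[indexed[i]];  /* index -> true color */
}
```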

Color spaces

I don't want to make this answer too long, and I think Wikipedia answers this better than I could anyway, so here is a link: Wikipedia: color space. In short, color spaces are ways to represent colors and combinations of colors.

Which color space do I choose for my application?

It depends on what the application does with the pixel data, since different color spaces serve different needs. YCbCr is used for moving pictures because it defines gamma levels, which in turn are defined by standards such as NTSC and PAL. sRGB does the same thing for computer monitors, where you can select a gamma/color profile for your particular screen. These color spaces are handy when it matters that the color you perceive on the screen is as close as possible to the color perceived on the final medium. RGB is often used when gamma isn't important and the computer screen is the final medium; it is easy to work with, since the color space is linear. So for a computer game you would probably use RGB, but for an image manipulation program like Photoshop or GIMP you would also support HSV/HSL/sRGB and CMYK.
When working with raw pixels in a framebuffer returned by an API, you can assume RGB unless you determine otherwise. When working with moving pictures, assume YCbCr. Hardware supports many different ways of encoding the data; make sure you pick a format with hardware support and performance in mind.
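
To give a feel for what such an encoding involves, here is a sketch of an 8-bit RGB to YCbCr conversion using the BT.601 full-range coefficients (one of several standards; BT.709, for example, uses different weights):

```c
/* RGB -> YCbCr sketch, BT.601 full-range coefficients:
 * Y carries brightness, Cb and Cr carry color differences
 * centered on 128. */
#include <stdint.h>

void rgb_to_ycbcr(uint8_t r, uint8_t g, uint8_t b,
                  uint8_t *y, uint8_t *cb, uint8_t *cr)
{
    *y  = (uint8_t)( 0.299    * r + 0.587    * g + 0.114    * b);
    *cb = (uint8_t)(-0.168736 * r - 0.331264 * g + 0.5      * b + 128);
    *cr = (uint8_t)( 0.5      * r - 0.418688 * g - 0.081312 * b + 128);
}
```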
