It is true that with virtual memory, you are able to have your programs commit (i.e. allocate) more memory in total than is physically available. However, this is only one of many benefits of having virtual memory, and it's not even the most important one. Personally, when I use a PC, I periodically check Task Manager to see how close I come to using my actual RAM. If I constantly go over, I go and buy more RAM.
The key attribute of all OSes that use virtual memory is that every process has its own isolated address space. That means you can have a machine with 1 GB of RAM running 50 processes, yet each one still has 4 GB of addressable memory space (assuming a 32-bit OS). Why is that important? It's not that you can "fake things out" and use RAM that isn't there. As soon as you go over and swapping starts, the virtual memory manager will begin thrashing and performance will come to a halt. A much more important implication is that if each program has its own address space, there is no way it can write to some random memory location and affect another program.
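A minimal illustration of that isolation (a POSIX-only Python sketch using `os.fork`; the same principle holds on Windows, just without `fork`): after the fork, parent and child see the same variable at the same virtual address, but their writes land in separate physical pages.

```python
import os

# Both processes will see `value` at the same virtual address,
# but each has its own physical copy of the page behind it.
value = 0

pid = os.fork()  # POSIX-only; Windows starts child processes fresh instead
if pid == 0:
    # Child: this write touches only the child's address space.
    value = 999
    os._exit(0)  # exit the child without running any further code

os.waitpid(pid, 0)
# The parent's copy is untouched -- the child could not reach it.
print(value)  # prints 0
```

The child genuinely wrote 999, but only into its own pages; the parent still sees 0.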
That's the main advantage: stability/reliability. In Windows 95, you could write an application that would crash the entire operating system. In W2K+, it is simply impossible for a program to pave all over its own address space and crash anything other than itself.
There are a few other advantages as well. When executables and DLLs are loaded into RAM, the virtual memory manager can detect when the same binary is loaded more than once and make multiple processes share the same physical RAM. At the virtual memory level, it appears as if each process has its own copy, but at a lower level it all gets mapped to one spot. This speeds up program startup and also optimizes memory usage, since each DLL is loaded only once.
Virtual memory managers also allow you to perform file I/O by simply mapping files to pages in the virtual address space. Besides offering an interesting alternative way of working with files, this also enables shared memory segments, where physical RAM with read/write pages is intentionally shared between processes for extremely efficient inter-process communication (IPC).
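A small Python sketch of that memory-mapped file I/O (the `mmap` module wraps `CreateFileMapping`/`MapViewOfFile` on Windows and `mmap(2)` on Unix): once the file is mapped, reads and writes are ordinary memory accesses and the OS does the paging behind the scenes.

```python
import mmap
import os
import tempfile

# Create a small scratch file to map (any seekable file works).
fd, path = tempfile.mkstemp()
os.write(fd, b"hello, mapped world")
os.close(fd)

with open(path, "r+b") as f:
    # Map the whole file into this process's virtual address space.
    with mmap.mmap(f.fileno(), 0) as mm:
        first = bytes(mm[:5])   # a read is just a memory access
        mm[0:5] = b"HELLO"      # a write goes through the mapping

with open(path, "rb") as f:
    contents = f.read()

print(first)     # b'hello'
print(contents)  # b'HELLO, mapped world'
os.remove(path)
```

Note there is no explicit `read()` or `write()` on the mapped region; the slice operations on `mm` are the I/O.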
With all these benefits, if we consider that most of the time you still want to aim for having more physical RAM than the total commit size, and that modern CPUs have support for virtual address mapping built directly into the hardware, the overhead of having a virtual memory manager is actually very minimal. On the other hand, in environments where many applications from many different vendors run concurrently, per-process address spaces are priceless.
I'm going to dump my understanding of this matter, with absolutely no background credentials to back it up. Gonna get downvoted? :)
First up, by saying primary memory is comparable to secondary memory, I assume you mean in terms of space. (After all, accessing RAM is faster than accessing storage.)
Now, as I understand it,
Random Access Memory is limited by address space, i.e. the set of addresses at which the operating system can store things. A 32-bit operating system is limited to roughly 4 GB of RAM, while 64-bit operating systems are (in theory) limited to 16 exabytes of RAM, although Windows 7 caps it at 192 GB for Ultimate edition, and Server 2008 supports up to 2 TB.
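Those limits are just pointer-width arithmetic (using binary units, so 1 GiB = 2^30 bytes and 1 EiB = 2^60 bytes):

```python
# A 32-bit pointer can name 2**32 distinct bytes: exactly 4 GiB.
gib = 2**30
print(2**32 // gib)  # 4

# A full 64-bit pointer could name 2**64 bytes: 16 EiB, far beyond
# what current CPUs and OSes actually wire up.
eib = 2**60
print(2**64 // eib)  # 16
```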
Of course, there are still multiple factors, such as
cost to manufacture RAM (8 GB on a single DIMM still runs in the hundreds)
DIMM slots on motherboards (I've seen boards with 4 slots)
But for the purpose of this discussion let us ignore these limitations, and talk just about space.
Let us talk about how applications nowadays deal with memory. Applications do not know how much memory exists - for the most part, an application simply requisitions it from the operating system. The operating system is the one responsible for managing which address ranges have been allocated to each application that is running. If it does not have enough, well, bad things happen.
But surely, with a theoretical 16 exabytes of RAM, you'd never run out?
Well, a famous person long ago supposedly said we'd never need more than 640 KB of RAM.
Because most applications nowadays are greedy (they take as much as the operating system is willing to give), if you ran enough applications on a powerful enough computer, you could theoretically exceed the limits of physical memory. In that case, virtual memory would be required to make up the extra required memory.
So to answer your question: (in my humble opinion formed from limited knowledge on the matter,) yes you'd still need to implement virtual memory.
Obviously take all this and do your own research. I'm turning this into a community wiki so others can edit it or just delete it if it is plain wrong :)
Virtual memory working
It may not answer your whole question, but it seems like the answer to me.