DOS ASCII animation lagging without constant input, Turbo C compilation
Here's an oddity from the past!

I'm writing an ASCII Pong game for the command prompt (yes yes, old school), and I'm writing to video memory directly (address 0xB8000000), so I know I'm rendering quickly (as opposed to gotoxy-then-printf rendering).

My code works fine and compiles fine under Turbo C++ V1.01, BUT the animation lags... now hold on, hold on, there's a caveat! Under my super fast boosted turbo Dell Core 2 Duo this seems logical; however, when I hold a key on the keyboard, the animation becomes smooth as a newly compiled baby's bottom.

I thought maybe I was slowing the computer down by overloading the keyboard buffer (wtf, really? come on...), but then I quickly smartened up and tried compiling with DJGPP and the Tiny C Compiler to test whether the results were the same. With the Tiny C Compiler I found I couldn't compile 'far' pointer types... still confused on that one, but I was able to compile with DJGPP, and the animation ran smoothly!

I want to compile this and have it work under Turbo C++, but this problem has been plaguing me for the past 3 days with no resolution. Does anyone know why the constant calls to my rendering method (code below) lag in the command prompt when compiled with Turbo C++ but not with DJGPP? I don't know whether I'm compiling as debug or not; I don't even know how to check. I did convert the code to ASM, and I saw what looked like debugging data at the header of the source, so I don't know...

Any and all comments and help will be greatly appreciated!

Here is a quick example of what I'm up against, simple to compile, so please check it out:
#include <stdio.h>
#include <conio.h>
#include <dos.h>
#include <time.h>

#define bX         80
#define bY         24
#define halfX      bX/2
#define halfY      bY/2
#define resolution bX*bY
#define LEFT       1
#define RIGHT      2

void GameLoop();
void render();
void clearBoard();
void printBoard();
void ballLogic();

typedef struct {
    int x, y;
} vertex;

vertex vertexWith(int x, int y) {
    vertex retVal;
    retVal.x = x;
    retVal.y = y;
    return retVal;
}

vertex vertexFrom(vertex from) {
    vertex retVal;
    retVal.x = from.x;
    retVal.y = from.y;
    return retVal;
}

int direction;
char far *Screen_base;
char *board;
vertex ballPos;

void main() {
    Screen_base = (char far *)0xB8000000;
    ballPos = vertexWith(halfX, halfY);
    direction = LEFT;
    board = (char *)malloc(resolution*sizeof(char));
    GameLoop();
}

void GameLoop() {
    char input;
    clrscr();
    clearBoard();
    do {
        if(kbhit())
            input = getch();
        render();
        ballLogic();
        delay(50);
    } while(input != 'p');
    clrscr();
}

void render() {
    clearBoard();
    board[ballPos.y*bX+ballPos.x] = 'X';
    printBoard();
}

void clearBoard() {
    int d;
    for(d = 0; d < resolution; d++)
        board[d] = ' ';
}

void printBoard() {
    int d;
    char far *target = Screen_base+d;
    for(d = 0; d < resolution; d++) {
        *target = board[d];
        *(target+1) = LIGHTGRAY;
        ++target;
        ++target;
    }
}

void ballLogic() {
    vertex newPos = vertexFrom(ballPos);
    if(direction == LEFT)
        newPos.x--;
    if(direction == RIGHT)
        newPos.x++;
    if(newPos.x == 0)
        direction = RIGHT;
    else if(newPos.x == bX)
        direction = LEFT;
    else
        ballPos = vertexFrom(newPos);
}
2 Answers
First, in the code: in printBoard, the line "char far *target = Screen_base+d;" uses the variable d before it is initialized.

My assumption is that if you are running this in a DOS window, rather than booting into DOS and running it, kbhit has to do more work (indirectly, within the DOS box's provided environment) when there isn't already a keypress queued up. This shouldn't affect your run time very much, but I suggest that when there is no keypress you explicitly set input to some constant. Also, input should really be an int, not a char. Something like this sketch, where 0 is just an arbitrary "no key" value:
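int input = 0;            /* int, not char: getch() returns an int */
/* ... inside the loop: */
if(kbhit())
    input = getch();
else
    input = 0;            /* explicit constant when no key is waiting */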
Other suggestions:
vertexFrom doesn't really do anything: a struct can be copied with plain assignment, so each call should be able to be replaced with:
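/* in ballLogic(); assignment copies both members of the struct */
vertex newPos = ballPos;   /* was: vertex newPos = vertexFrom(ballPos); */
/* ... */
ballPos = newPos;          /* was: ballPos = vertexFrom(newPos); */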
Your macro constants that have operators in them should have parentheses around them. For instance, halfX, halfY, and resolution should be:
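#define halfX      (bX/2)
#define halfY      (bY/2)
#define resolution (bX*bY)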
so that you never ever have to worry about operator precedence, no matter what code surrounds a use of the macro.
Under 16-bit x86 PCs there are actually 4 display areas (text pages) that can be switched between. If you can swap between 2 of those for your animation, your updates should appear to happen instantaneously. It's called double buffering: you have one buffer that acts as the current display buffer and one that is the working buffer, and when you are satisfied with the working buffer (and the time is right, if you are trying to update the screen at a certain rate) you swap them. I don't remember exactly how to do this, but the particulars shouldn't be too difficult to find; a sketch follows below. I'd suggest that you might leave the initial buffer alone and restore back to it upon exit, so that the program leaves the screen in just about the state it started in. Also, you could use another buffer to hold debug output, and then if you held down the space bar or something, that buffer could be displayed.
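A minimal Turbo C sketch of that page flip, assuming 80x25 color text mode, where the hardware reserves 0x1000 bytes per text page at segment 0xB800 and BIOS INT 10h function 05h selects the visible page (page_base and show_page are names invented for this sketch):

#include <dos.h>

#define PAGE_BYTES 0x1000                 /* bytes reserved per 80x25 text page */

/* far pointer to the first byte of a given text page */
char far *page_base(int page) {
    return (char far *)MK_FP(0xB800, page * PAGE_BYTES);
}

/* ask the BIOS to display a page: INT 10h, AH = 05h, AL = page number */
void show_page(int page) {
    union REGS r;
    r.h.ah = 0x05;
    r.h.al = (unsigned char)page;
    int86(0x10, &r, &r);
}

Each frame you'd render through page_base(hidden), call show_page(hidden), and then swap the visible and hidden page indices, rather than rewriting the displayed page in place.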
If you don't want to go that route, and the 'X' is the only thing changing, then you could forgo clearing the screen and just clear the last location of the 'X'.
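In terms of the posted code, that could look like the fragment below, where oldPos is a new vertex introduced by this sketch to remember the previous frame's position:

board[oldPos.y*bX + oldPos.x] = ' ';     /* blank last frame's 'X' only    */
board[ballPos.y*bX + ballPos.x] = 'X';   /* draw at the new position       */
oldPos = ballPos;                        /* remember it for the next frame */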
Isn't the screen buffer an array of 2-byte units -- one for the display character, and the other for the attributes? I think so, so I would represent it as an array of a two-field struct, something like:
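typedef struct {
    char ch;    /* character to display */
    char attr;  /* its color attribute  */
} cell;         /* type and field names are just an illustration */

cell far *screen = (cell far *)0xB8000000;
/* e.g.: screen[d].ch = board[d]; screen[d].attr = LIGHTGRAY; */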
This would make it less likely for you to make mistakes based on offsets.
I'd also probably read and write them to the buffer as 16-bit values rather than bytes, though this shouldn't make a big difference.
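For instance, a sketch of printBoard's loop writing one word per cell, assuming a 16-bit int (as under Turbo C) and the usual little-endian layout of low byte = character, high byte = attribute:

unsigned int far *cells = (unsigned int far *)Screen_base;
int d;
for(d = 0; d < resolution; d++)
    cells[d] = ((unsigned int)LIGHTGRAY << 8) | (unsigned char)board[d];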
I figured out why it wasn't rendering right away. The timer that I created is fine; the problem is that the actual clock_t is only accurate to .054547XXX seconds or so, so I could only render at 18 fps. The way I would fix this is by using a more accurate clock... which is a whole other story.
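For context, that ~0.055 s granularity matches the PC's 18.2 Hz timer interrupt, which increments the BIOS master tick count that Turbo C's clock() is derived from. A sketch that reads that counter directly (same coarse steps, just without any scaling):

#include <dos.h>

/* BIOS data area: dword at 0040:006Ch, incremented ~18.2 times per second */
unsigned long far *ticks = (unsigned long far *)MK_FP(0x0040, 0x006C);

void wait_one_tick(void) {
    unsigned long start = *ticks;
    while (*ticks == start)
        ;   /* busy-wait: each step is ~55 ms */
}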