Char[] to byte[] for output optimization on the Web (Java)
I just found this in an experience-sharing presentation from InfoQ. It claims that if you convert the String to a byte[] in a servlet, it will increase the QPS (queries per second).
The code example shows the comparison:
Before

private static String content = "…94k…";

protected void doGet(…) {
    response.getWriter().print(content);
}
After

private static String content = "…94k…";
private static byte[] bytes = content.getBytes();

protected void doGet(…) {
    response.getOutputStream().write(bytes);
}
Result before
- page size (K): 94
- max QPS: 1800

Result after
- page size (K): 94
- max QPS: 3500
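A minimal sketch of where the saving comes from, runnable outside any servlet container (the class name, content size, and request count are made up for illustration): `getWriter().print(content)` must re-encode the String's chars into bytes on every request, while the cached byte[] pays that cost exactly once.

```java
import java.nio.charset.StandardCharsets;

public class EncodeOnceDemo {
    // Roughly a 94 KB page, mirroring the presentation's example.
    static String content = makeContent(94 * 1024);

    static String makeContent(int size) {
        StringBuilder sb = new StringBuilder(size);
        while (sb.length() < size) sb.append("<p>hello world</p>");
        return sb.toString();
    }

    public static void main(String[] args) {
        int requests = 1000;
        long sink = 0;

        // "Before": encode the chars on every request, as a Writer must.
        long t0 = System.nanoTime();
        for (int i = 0; i < requests; i++) {
            byte[] b = content.getBytes(StandardCharsets.UTF_8);
            sink += b.length;
        }
        long everyTime = System.nanoTime() - t0;

        // "After": encode once, then reuse the cached bytes.
        long t1 = System.nanoTime();
        byte[] cached = content.getBytes(StandardCharsets.UTF_8);
        for (int i = 0; i < requests; i++) {
            sink += cached.length;
        }
        long once = System.nanoTime() - t1;

        System.out.println("encode every request: " + everyTime / 1_000_000 + " ms");
        System.out.println("encode once:          " + once / 1_000_000 + " ms");
        System.out.println("bytes accounted: " + sink);
    }
}
```

The exact timings depend on the JVM and charset, but the per-request encoding loop is the work the byte[] version skips.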
Can anyone explain why this is an optimization? I believe the numbers are real.
UPDATE
In case I caused any confusion: the original presentation only uses this as an example. What they actually did was refactor the Velocity engine in this way, but that source code is a bit long.
The presentation didn't explain how they did it in detail, but I found some leads.
In ASTText.java, they cached byte[] ctext instead of char[] ctext, which boosts performance a lot!
Just like the servlet example above. It makes a lot of sense, right?
(But they would surely also have to refactor the Node interface: a Writer cannot write a byte[], which means using an OutputStream instead!)
As Perception advised, a Writer actually ends up delegating to a StreamEncoder, and StreamEncoder's write first turns the char[] into a byte[] and then delegates to the OutputStream to do the real write. You can easily check the source code and confirm it.
Considering that the render method is called every time the page is shown, the cost saved is considerable.
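The delegation chain can be observed directly (a small sketch; the class and method names here are made up, and StreamEncoder itself is an internal JDK class reached through OutputStreamWriter): chars pushed into a Writer built over an OutputStream come out the other side as encoded bytes, and that encoding work is repeated on every write call.

```java
import java.io.ByteArrayOutputStream;
import java.io.OutputStreamWriter;
import java.io.Writer;
import java.nio.charset.StandardCharsets;

public class WriterDelegationDemo {
    // Pushes a String's chars through a Writer layered over a
    // ByteArrayOutputStream and returns the bytes the stream received.
    // Internally, OutputStreamWriter delegates to a StreamEncoder, which
    // encodes char[] to byte[] before the underlying stream sees anything.
    static byte[] encodeViaWriter(String s) throws Exception {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        Writer writer = new OutputStreamWriter(out, StandardCharsets.UTF_8);
        writer.write(s.toCharArray()); // chars are encoded to bytes here, per call
        writer.flush();
        return out.toByteArray();
    }

    public static void main(String[] args) throws Exception {
        byte[] encoded = encodeViaWriter("template text");
        // The stream only ever saw bytes; the Writer did the encoding.
        System.out.println(encoded.length + " bytes received by the stream");
    }
}
```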
public class ASTText extends SimpleNode {
    private char[] ctext;

    /**
     * @param id
     */
    public ASTText(int id) {
        super(id);
    }

    /**
     * @param p
     * @param id
     */
    public ASTText(Parser p, int id) {
        super(p, id);
    }

    /**
     * @see org.apache.velocity.runtime.parser.node.SimpleNode#jjtAccept(org.apache.velocity.runtime.parser.node.ParserVisitor, java.lang.Object)
     */
    public Object jjtAccept(ParserVisitor visitor, Object data) {
        return visitor.visit(this, data);
    }

    /**
     * @see org.apache.velocity.runtime.parser.node.SimpleNode#init(org.apache.velocity.context.InternalContextAdapter, java.lang.Object)
     */
    public Object init(InternalContextAdapter context, Object data)
            throws TemplateInitException {
        Token t = getFirstToken();
        String text = NodeUtils.tokenLiteral(t);
        ctext = text.toCharArray();
        return data;
    }

    /**
     * @see org.apache.velocity.runtime.parser.node.SimpleNode#render(org.apache.velocity.context.InternalContextAdapter, java.io.Writer)
     */
    public boolean render(InternalContextAdapter context, Writer writer)
            throws IOException {
        if (context.getAllowRendering()) {
            writer.write(ctext);
        }
        return true;
    }
}
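A hypothetical sketch of the refactoring hinted at above (this is NOT the actual Velocity patch; the class, the UTF-8 choice, and the simplified init/render signatures are my own assumptions): cache the text as byte[] at init time and render to an OutputStream, so the char-to-byte encoding happens once per template instead of once per render() call.

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.OutputStream;
import java.nio.charset.StandardCharsets;

public class ByteCachedTextNode {
    private byte[] btext;

    // Called once when the template is parsed: pay the encoding cost here.
    public void init(String text) {
        btext = text.getBytes(StandardCharsets.UTF_8);
    }

    // Called on every page render: now a plain byte copy, no re-encoding.
    public boolean render(OutputStream out) throws IOException {
        out.write(btext);
        return true;
    }

    public static void main(String[] args) throws IOException {
        ByteCachedTextNode node = new ByteCachedTextNode();
        node.init("<h1>Hello</h1>");
        ByteArrayOutputStream page = new ByteArrayOutputStream();
        node.render(page);
        System.out.println(page.toString("UTF-8"));
    }
}
```

Pushing the interface down to OutputStream like this is exactly why the Node interface would need refactoring: the encoding charset becomes fixed at parse time rather than chosen by the Writer at render time.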
Comments (1)
Apart from the fact that you aren't calling the same output methods, in the second example you avoid the overhead of converting the String to bytes before writing it to the output stream. These scenarios are not very realistic, though: the dynamic nature of web applications precludes pre-converting all your data models into byte streams. And no serious architecture today has you writing directly to the HTTP output stream like this.