Removing leading and trailing characters from a stream
I have a low level caching mechanism which receives a json array from a server and caches it in a file.
The actual caching mechanism just saves large streams to a file, with no awareness that they contain json. Therefore, when I append a stream to an existing file cache by aggregating the streams into another file, I end up with something like this:
[{"id":3144,"created_at":"1322064201"}][{"id":3144,"created_at":"1322064201"}]
where obviously what I desire is something like this:
[{"id":3144,"created_at":"1322064201"},{"id":3144,"created_at":"1322064201"}]
What is the most efficient/effective way of doing this?
I have looked into FilterReader, but since all I actually need to do is remove the last char ] of the existing cache and the first char [ of the new content, and add a , between them, I thought there might be a better way than checking every char in these big streams.
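In other words, only the boundary characters need touching. A minimal sketch of that idea (appendArrayBody is a hypothetical name; it assumes the existing cache's trailing ] has already been dropped, which the update below comes back to): consume the new stream's opening [ and write a , in its place, then bulk-copy the rest.

private static void appendArrayBody(InputStream newContent, OutputStream out) throws IOException {
    // Only the first byte of the new stream is inspected; the rest is a plain copy.
    int first = newContent.read();
    if (first != '[') {
        throw new IOException("expected a json array, got: " + (char) first);
    }
    out.write(',');                          // splice point between the two arrays
    byte[] buffer = new byte[8192];
    int read;
    while ((read = newContent.read(buffer)) != -1) {
        out.write(buffer, 0, read);          // the new content already ends with ']'
    }
}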
For context, my code does something like this:
// ... input stream passed in with the new content
File newCache = new File("JamesBluntHatersClub");
FileOutputStream tempFileOutputStream = new FileOutputStream(newCache);
FileInputStream fileInputStream = new FileInputStream(existingCache);
copyStream(fileInputStream, tempFileOutputStream);   // copy the existing cache first
copyStream(inputStream, tempFileOutputStream);       // then append the new content
// ... clean up
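copyStream itself isn't shown in the question; a typical stand-in (an assumption, not the poster's actual helper) is a raw buffered copy with no awareness of the json structure, which is exactly why the two arrays end up concatenated:

private static void copyStream(InputStream in, OutputStream out) throws IOException {
    // Plain byte-for-byte copy; nothing here knows or cares about json.
    byte[] buffer = new byte[8192];
    int read;
    while ((read = in.read(buffer)) != -1) {
        out.write(buffer, 0, read);
    }
}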
UPDATE:
Having implemented a FilterReader which checks chars one at a time like so:
@Override
public int read() throws IOException {
    int content = super.read();
    // skip square brackets by returning the next char instead
    switch (content) {
        case SQUARE_BRACKETS_OPEN:
            return super.read();
        case SQUARE_BRACKETS_CLOSE:
            return super.read();
        default:
            return content;
    }
}
the processing time is unacceptably slow, so I am looking for another option. I was thinking of using the file size to locate the trailing square bracket and remove it that way.
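One way that idea could look (a sketch under the assumption that the cache file ends with exactly ] and no trailing whitespace): truncate the existing file by one byte via RandomAccessFile, then append the new content in append mode using the splice shown earlier.

// Drop the existing cache's closing ']' using the file length; nothing is scanned.
RandomAccessFile cache = new RandomAccessFile(existingCache, "rw");
try {
    long length = cache.length();
    if (length > 0) {
        cache.setLength(length - 1);      // cut off the trailing ']'
    }
} finally {
    cache.close();
}
// The new content can then be appended with new FileOutputStream(existingCache, true),
// writing a ',' and skipping its leading '[' as in the earlier sketch.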
1 Answer
This method did the trick.
Would be very interested to hear if this is bad practice; I am guessing there may be encoding issues.
UPDATE:
Added the ability to handle an empty json array in either stream.
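The answer's actual code isn't reproduced here, but the empty-array check it describes might look something like this (a hypothetical sketch, reusing existingCache and inputStream from the question): peek one byte past the new content's [ to detect an empty array, and use the cache file's length to see whether it currently holds just [].

// Hypothetical sketch of the empty-array handling described above.
PushbackInputStream in = new PushbackInputStream(inputStream);
in.read();                                    // consume the new content's leading '['
int next = in.read();
boolean newContentEmpty = (next == ']' || next == -1);
if (!newContentEmpty) {
    in.unread(next);                          // push the first real byte back for copying
}
boolean existingEmpty = existingCache.length() <= 2;   // cache currently holds just "[]"
// Only truncate the cache's trailing ']' and write a ',' when both arrays have
// elements; if the new content is empty there is nothing to append at all.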