Loading many properties files listed in a single text file into a LinkedHashMap
I have a file which contains the names of many properties files, one per line: there may be around 1000 of them, and each properties file will have around 5000 key-value pairs. For example (abc.txt):
abc1.properties
abc2.properties
abc3.properties
abc4.properties
abc5.properties
So I open this file, and as each line is read I load the corresponding properties file in the loadProperties method and store that file's key-value pairs in a LinkedHashMap.
import java.io.BufferedReader;
import java.io.FileNotFoundException;
import java.io.FileReader;
import java.io.IOException;
import java.io.InputStream;
import java.util.HashMap;
import java.util.LinkedHashMap;
import java.util.Properties;

public class Project {
    public static HashMap<String, String> hashMap;

    public static void main(String[] args) {
        BufferedReader br = null;
        hashMap = new LinkedHashMap<String, String>();
        try {
            br = new BufferedReader(new FileReader(
                    "C:\\apps\\apache\\tomcat7\\webapps\\examples\\WEB-INF\\classes\\abc.txt"));
            String line = null;
            while ((line = br.readLine()) != null) {
                loadProperties(line); // loads abc1.properties the first time through
            }
        } catch (FileNotFoundException e1) {
            e1.printStackTrace();
        } catch (IOException e) {
            e.printStackTrace();
        } finally {
            if (br != null) { // avoid a NullPointerException if the file never opened
                try {
                    br.close();
                } catch (IOException e) {
                    e.printStackTrace();
                }
            }
        }
    }

    // Loads one properties file. If a key already exists in hashMap, the new
    // value is concatenated onto the previous value; this repeats every time
    // a duplicate key is found.
    private static void loadProperties(String line) {
        Properties prop = new Properties();
        InputStream in = Project.class.getResourceAsStream(line);
        String value = null;
        try {
            prop.load(in);
            for (Object str : prop.keySet()) {
                if (hashMap.containsKey(str.toString())) {
                    StringBuilder sb = new StringBuilder()
                            .append(hashMap.get(str))
                            .append("-")
                            .append(prop.getProperty((String) str));
                    hashMap.put(str.toString(), sb.toString());
                } else {
                    value = prop.getProperty((String) str);
                    hashMap.put(str.toString(), value);
                    System.out.println(str + " - " + value);
                }
            }
        } catch (IOException e) {
            e.printStackTrace();
        } finally {
            if (in != null) { // getResourceAsStream returns null if the resource is missing
                try {
                    in.close();
                } catch (IOException e) {
                    e.printStackTrace();
                }
            }
        }
    }
}
So my question is: I have more than 1000 properties files, and each properties file has more than 5000 key-value pairs. Most of the properties files share the same keys but with different values, so when a key repeats I have to concatenate the new value with the previous one. Is there any limitation on the size of the LinkedHashMap as the number of properties files, and the key-value pairs within them, keeps growing? And is this code optimized enough to handle this kind of problem?
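The concatenate-on-duplicate-key behaviour described above can be reduced to a minimal, self-contained sketch. The file contents and keys below are made up for illustration; two Properties objects stand in for two of the ~1000 files:

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.Properties;

public class MergeSketch {
    // Merge one Properties object into the accumulating map, joining the
    // values of duplicate keys with "-" (the same rule as in the question).
    static void merge(Map<String, String> map, Properties prop) {
        for (Object k : prop.keySet()) {
            String key = (String) k;
            String value = prop.getProperty(key);
            String old = map.get(key);
            map.put(key, old == null ? value : old + "-" + value);
        }
    }

    public static void main(String[] args) {
        // LinkedHashMap preserves the order in which keys were first inserted.
        Map<String, String> map = new LinkedHashMap<String, String>();

        Properties p1 = new Properties(); // stands in for abc1.properties
        p1.setProperty("color", "red");

        Properties p2 = new Properties(); // stands in for abc2.properties
        p2.setProperty("color", "blue"); // duplicate key
        p2.setProperty("size", "large");

        merge(map, p1);
        merge(map, p2);

        System.out.println(map.get("color")); // prints "red-blue"
        System.out.println(map.get("size"));  // prints "large"
    }
}
```

Note that only the map itself needs to hold all entries at once; each Properties object can be discarded after it is merged, which keeps the peak memory footprint close to the size of the final map.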
The Map does not have any limitation except the size of the memory heap you allocate for your JVM, which you can control with the -Xmx option. Your code is OK from a performance perspective.
But I can suggest the following improvements.
Avoid using hashMap.containsKey(str.toString()) and then hashMap.get(str). containsKey(key) is implemented as return get(key) != null, so you actually call get() twice. You can say something like the following instead:

value = map.get(key);
if (value != null) {
    value += "-" + prop.getProperty(key);
} else {
    value = prop.getProperty(key);
}
map.put(key, value);

Do not call str.toString(). This call just creates yet another String instance equal to the original one. Since the Properties class is not parameterized, use a cast instead, i.e. (String) str.
If you still have performance problems, you can merge all the properties files first and then load them once using Properties.load(). You will probably get some performance benefit.
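On Java 8 and later, the single-lookup pattern suggested above can be collapsed even further with Map.merge, which performs the absent-key insert and the duplicate-key concatenation in one call. This is a sketch of that alternative, not the original poster's code; the key and values are invented for the example:

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class MergeDemo {
    // Accumulate one key-value pair: insert the value if the key is absent,
    // otherwise apply the remapping function to join old and new values.
    static void accumulate(Map<String, String> map, String key, String value) {
        map.merge(key, value, (oldV, newV) -> oldV + "-" + newV);
    }

    public static void main(String[] args) {
        Map<String, String> map = new LinkedHashMap<>();
        accumulate(map, "env", "dev");
        accumulate(map, "env", "prod"); // duplicate key: values are joined
        System.out.println(map.get("env")); // prints "dev-prod"
    }
}
```

Besides doing a single hash lookup, merge keeps the duplicate-key policy in one place instead of spreading it over an if/else branch.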