Inserting an individual into file.owl
I'm trying to insert an individual into my OWL file with SPARQL Update 1.1, but it doesn't work. If anyone has an example, please don't hesitate to give me an answer.
public static void main(String[] args) {
    String ont = "http://localhost:8080/webdav/elearning";
    // Ontology model backed by the Pellet reasoner; the document is
    // fetched over HTTP into an in-memory model
    OntModel model = ModelFactory.createOntologyModel(PelletReasonerFactory.THE_SPEC);
    model.read(ont + ".owl");
    // All predicates use the declared "table:" prefix (an undeclared ":"
    // prefix would make UpdateFactory.create() fail to parse the request)
    String requete =
        "PREFIX table: <http://www.owl-ontologies.com/Ontology1239120737.owl#>\n" +
        "INSERT DATA { table:etud1 table:APourNom 'Saleh' .\n" +
        "              table:etud1 table:APourLogin 'saleh' .\n" +
        "              table:etud1 table:APourPWD 'saleh' . }";
    GraphStore graphstore = GraphStoreFactory.create();
    graphstore.setDefaultGraph(model.getGraph());
    UpdateRequest updaterequest = UpdateFactory.create(requete);
    updaterequest.exec(graphstore);
    System.out.println("OK");
}
When I run my program, I get a message telling me that all is OK, but when I open my ontology, I don't find the inserted element. Please help me resolve this problem.
1 Answer
What you are doing here is updating an in-memory graph object that contains a copy of the contents of your ontology, which you originally fetched from http://localhost:8080/webdav/elearning. That copy is not the same as the document hosted by the application server on port 8080, so it's not surprising that you don't see the changes in the original, app-server-hosted document. There are basically four ways you can go about this:
1. Instead of a normal web server (like Tomcat or Jetty), you could use an RDF-specific data server such as Fuseki. Correctly configured, it can respond to SPARQL Update requests directly, so rather than copying the document into a local graph you update the graph in situ.
2. You could do as you are doing now, and then, when you have finished updating the graph, arrange to HTTP POST the updated graph back to the application server on port 8080. This will require you to set up a route for the update (e.g. http://localhost:8080/webdav/elearning/update) and a suitable handler, but it's quite doable.
3. Instead of accessing your ontology from a web URL, you could access a file on your file system. Again, though, once you have finished updating, you will have to save the contents of the updated graph back to the file system.
4. Instead of accessing your ontology from a web URL, you could load it into a persistent store such as TDB. With this approach, you open a Jena Dataset that connects directly to the store and run your SPARQL queries and updates against that. There is no need to save anything separately: all updates go directly into the TDB instance.

Your original sample uses Pellet as a reasoning engine. If this is important for your application, then you really do want a copy of the data in memory for efficiency, so the second and third solutions above would be better suited than the others.
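For the first option, the client side shrinks to a remote update call. A minimal sketch, assuming a Fuseki server is already running and exposes an update endpoint at http://localhost:3030/elearning/update (the dataset name and port are hypothetical, and the class names are from the modern org.apache.jena packages):

```java
import org.apache.jena.update.UpdateExecutionFactory;
import org.apache.jena.update.UpdateFactory;
import org.apache.jena.update.UpdateRequest;

public class FusekiInsert {
    public static void main(String[] args) {
        String update =
            "PREFIX table: <http://www.owl-ontologies.com/Ontology1239120737.owl#>\n" +
            "INSERT DATA { table:etud1 table:APourNom 'Saleh' . }";
        UpdateRequest request = UpdateFactory.create(update);
        // Send the update to the server; the server-side graph is modified
        // in place, so there is nothing to write back afterwards.
        UpdateExecutionFactory.createRemote(request,
                "http://localhost:3030/elearning/update").execute();
    }
}
```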
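For the third option, the read-update-save cycle could look roughly like this. A sketch, assuming a local file named elearning.owl exists in the working directory:

```java
import java.io.FileOutputStream;
import java.io.OutputStream;
import org.apache.jena.rdf.model.Model;
import org.apache.jena.rdf.model.ModelFactory;
import org.apache.jena.update.UpdateAction;

public class FileInsert {
    public static void main(String[] args) throws Exception {
        Model model = ModelFactory.createDefaultModel();
        model.read("elearning.owl");   // load the ontology from disk
        String update =
            "PREFIX table: <http://www.owl-ontologies.com/Ontology1239120737.owl#>\n" +
            "INSERT DATA { table:etud1 table:APourNom 'Saleh' . }";
        UpdateAction.parseExecute(update, model);  // modifies the in-memory model
        // Persist the change: without this step the insert is lost,
        // exactly as in the original program.
        try (OutputStream out = new FileOutputStream("elearning.owl")) {
            model.write(out, "RDF/XML");
        }
    }
}
```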
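With TDB, the fourth option, there is no separate save step at all; a committed update goes straight into the store. A sketch, assuming a TDB directory name of your choosing:

```java
import org.apache.jena.query.Dataset;
import org.apache.jena.query.ReadWrite;
import org.apache.jena.tdb.TDBFactory;
import org.apache.jena.update.UpdateAction;

public class TdbInsert {
    public static void main(String[] args) {
        // Opens (or creates) a persistent dataset backed by the given directory
        Dataset dataset = TDBFactory.createDataset("tdb-elearning");
        String update =
            "PREFIX table: <http://www.owl-ontologies.com/Ontology1239120737.owl#>\n" +
            "INSERT DATA { table:etud1 table:APourNom 'Saleh' . }";
        dataset.begin(ReadWrite.WRITE);
        try {
            UpdateAction.parseExecute(update, dataset);
            dataset.commit();   // the triple is now durably stored
        } finally {
            dataset.end();
        }
    }
}
```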