
Jsoup tutorial

Published on 2025-02-22 22:20:11

This is an introductory tutorial of the Jsoup HTML parser. In the tutorial, we are going to parse HTML data from an HTML string, a local HTML file, and a web page. We are also going to sanitize data and perform a Google search.

Jsoup is a Java library for extracting and manipulating HTML data. It implements the HTML5 specification, and parses HTML to the same DOM as modern browsers. The project's web site is jsoup.org.

With Jsoup we are able to:

  • scrape and parse HTML from a URL, file, or string
  • find and extract data, using DOM traversal or CSS selectors
  • manipulate the HTML elements, attributes, and text
  • clean user-submitted content against a safe white-list, to prevent XSS attacks
  • output tidy HTML

<dependency>
  <groupId>org.jsoup</groupId>
  <artifactId>jsoup</artifactId>
  <version>1.9.2</version>
</dependency>

In the examples of this tutorial, we have used the above Maven dependency.

The Jsoup class provides the core public access point to the jsoup functionality.

Parsing an HTML string

In the first example, we are going to parse an HTML string.

JSoupFromStringEx.java

package com.zetcode;

import org.jsoup.Jsoup;
import org.jsoup.nodes.Document;

public class JSoupFromStringEx {

  public static void main(String[] args) {
    
    String htmlString = "<html><head><title>My title</title></head>"
        + "<body>Body content</body></html>";

    Document doc = Jsoup.parse(htmlString);
    String title = doc.title();
    String body = doc.body().text();
    
    System.out.printf("Title: %s%n", title);
    System.out.printf("Body: %s", body);
  }
}

The example parses an HTML string and outputs its title and body content.

String htmlString = "<html><head><title>My title</title></head>"
    + "<body>Body content</body></html>";

This string contains simple HTML data.

Document doc = Jsoup.parse(htmlString);

With Jsoup's parse() method, we parse the HTML string. The method returns an HTML document.

String title = doc.title();

The document's title() method gets the string contents of the document's title element.

String body = doc.body().text();

The document's body() method returns the body element; its text() method gets the text of the element.

Parsing a local HTML file

In the second example, we are going to parse a local HTML file. We use the overloaded Jsoup.parse() method that takes a File object as its first parameter.

index.html

<!DOCTYPE html>
<html>
  <head>
    <title>My title</title>
    <meta charset="UTF-8">
  </head>
  <body>
    <div id="mydiv">Contents of a div element</div>
  </body>
</html>

For the example, we use the above HTML file.

JSoupFromFileEx.java

package com.zetcode;

import java.io.File;
import java.io.IOException;
import org.jsoup.Jsoup;
import org.jsoup.nodes.Document;
import org.jsoup.nodes.Element;

public class JSoupFromFileEx {
  
  public static void main(String[] args) throws IOException {
    
    String fileName = "src/main/resources/index.html";
    
    Document doc = Jsoup.parse(new File(fileName), "utf-8"); 
    Element divTag = doc.getElementById("mydiv"); 
    
    System.out.println(divTag.text());
  }
}

The example parses the index.html file, which is located in the src/main/resources/ directory.

Document doc = Jsoup.parse(new File(fileName), "utf-8"); 

We parse the HTML file with the Jsoup.parse() method.

Element divTag = doc.getElementById("mydiv"); 

With the document's getElementById() method, we get the element by its ID.

System.out.println(divTag.text());

The text of the tag is retrieved with the element's text() method.
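As an aside, getElementById() is equivalent to a #id CSS selector query passed to select(). The following sketch (the class name is our own) shows both approaches on an in-memory HTML string instead of a file:

```java
package com.zetcode;

import org.jsoup.Jsoup;
import org.jsoup.nodes.Document;

public class JSoupSelectorSketch {

  public static void main(String[] args) {

    String html = "<html><body><div id=\"mydiv\">Contents of a div element</div></body></html>";

    Document doc = Jsoup.parse(html);

    // getElementById() and a #id CSS selector locate the same element
    String byId = doc.getElementById("mydiv").text();
    String bySelector = doc.select("#mydiv").first().text();

    System.out.println(byId);
    System.out.println(bySelector);
  }
}
```

Both lines print the same div content; select() is more flexible, while getElementById() is more direct when the ID is known.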

Reading a web site's title

In the following example, we scrape and parse a web page and retrieve the content of the title element.

JSoupTitleEx.java

package com.zetcode;

import java.io.IOException;
import org.jsoup.Jsoup;
import org.jsoup.nodes.Document;

public class JSoupTitleEx {

  public static void main(String[] args) throws IOException {
    
    String url = "http://www.something.com";
    
    Document doc = Jsoup.connect(url).get();
    String title = doc.title();
    System.out.println(title);
  }
}

In the code example, we read the title of a specified web page.

Document doc = Jsoup.connect(url).get();

Jsoup's connect() method creates a connection to the given URL. The get() method executes a GET request and parses the result; it returns an HTML document.

String title = doc.title();

With the document's title() method, we get the title of the HTML document.

Reading HTML source

The next example retrieves the HTML source of a web page.

JSoupHTMLSourceEx.java

package com.zetcode;

import java.io.IOException;
import org.jsoup.Jsoup;

public class JSoupHTMLSourceEx {

  public static void main(String[] args) throws IOException {
    
    String webPage = "http://www.something.com";

    String html = Jsoup.connect(webPage).get().html();

    System.out.println(html);
  }
}

The example prints the HTML of a web page.

String html = Jsoup.connect(webPage).get().html();

The html() method returns the HTML of an element; in our case the HTML source of the whole document.
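Note that html() can be called on any element, not just the whole document: Element.html() returns the element's inner HTML, while outerHtml() also includes the element's own tag. A minimal sketch (the class name is ours):

```java
package com.zetcode;

import org.jsoup.Jsoup;
import org.jsoup.nodes.Element;

public class JSoupHtmlMethodSketch {

  public static void main(String[] args) {

    Element div = Jsoup.parse("<div><p>Hello</p></div>").select("div").first();

    // html() returns the inner HTML of the element...
    System.out.println(div.html());

    // ...while outerHtml() includes the element's own tag
    System.out.println(div.outerHtml());
  }
}
```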

Getting meta information

Meta information of an HTML document provides structured metadata about a web page, such as its description and keywords.

JSoupMetaInfoEx.java

package com.zetcode;

import java.io.IOException;
import org.jsoup.Jsoup;
import org.jsoup.nodes.Document;

public class JSoupMetaInfoEx {

  public static void main(String[] args) throws IOException {
    
    String url = "http://www.jsoup.org";
    
    Document document = Jsoup.connect(url).get();

    String description = document.select("meta[name=description]").first().attr("content");
    System.out.println("Description : " + description);

    String keywords = document.select("meta[name=keywords]").first().attr("content");
    System.out.println("Keywords : " + keywords);
  }
}

The code example retrieves meta information about a specified web page.

String keywords = document.select("meta[name=keywords]").first().attr("content");

The document's select() method finds elements that match the given query. The first() method returns the first matched element. With the attr() method, we get the value of the content attribute.
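The select() method accepts a rich, CSS-like query syntax beyond the attribute query used above. The following sketch (the class name and sample markup are our own) shows a few common query forms:

```java
package com.zetcode;

import org.jsoup.Jsoup;
import org.jsoup.nodes.Document;

public class JSoupSelectSketch {

  public static void main(String[] args) {

    String html = "<html><body>"
        + "<p class=\"intro\">First</p>"
        + "<p>Second</p>"
        + "<a href=\"/home\">Home</a>"
        + "</body></html>";

    Document doc = Jsoup.parse(html);

    // select by tag name
    System.out.println(doc.select("p").size());           // 2

    // select by tag and class
    System.out.println(doc.select("p.intro").text());     // First

    // select by attribute presence; attr() reads from the first match
    System.out.println(doc.select("a[href]").attr("href")); // /home
  }
}
```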

Parsing links

The next example parses links from an HTML page.

JSoupLinksEx.java

package com.zetcode;

import java.io.IOException;
import org.jsoup.Jsoup;
import org.jsoup.nodes.Document;
import org.jsoup.nodes.Element;
import org.jsoup.select.Elements;

public class JSoupLinksEx {

  public static void main(String[] args) throws IOException {
    
    String url = "http://jsoup.org";

    Document document = Jsoup.connect(url).get();
    Elements links = document.select("a[href]");
    
    for (Element link : links) {
      
      System.out.println("link : " + link.attr("href"));
      System.out.println("text : " + link.text());
    }
  }
}

In the example, we connect to a web page and parse all its link elements.

Elements links = document.select("a[href]");

To get a list of links, we use the document's select() method.
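The href attributes of scraped links are often relative. Jsoup's absUrl() method resolves them against the document's base URI, which is set automatically when connect() is used, or can be passed to parse() explicitly. A small sketch (the class name is ours):

```java
package com.zetcode;

import org.jsoup.Jsoup;
import org.jsoup.nodes.Document;
import org.jsoup.nodes.Element;
import org.jsoup.select.Elements;

public class JSoupAbsUrlSketch {

  public static void main(String[] args) {

    // the second argument is the base URI used to resolve relative links
    Document doc = Jsoup.parse("<a href=\"/download\">Download</a>",
        "http://jsoup.org");

    Elements links = doc.select("a[href]");

    for (Element link : links) {

      System.out.println("relative : " + link.attr("href"));  // /download
      System.out.println("absolute : " + link.absUrl("href")); // http://jsoup.org/download
    }
  }
}
```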

Sanitizing HTML data

Jsoup provides methods for sanitizing HTML data.

JsoupSanitizeEx.java

package com.zetcode;

import org.jsoup.Jsoup;
import org.jsoup.nodes.Document;
import org.jsoup.safety.Cleaner;
import org.jsoup.safety.Whitelist;

public class JsoupSanitizeEx {
  
  public static void main(String[] args) {
    
    String htmlString = "<html><head><title>My title</title></head>"
        + "<body><center>Body content</center></body></html>";

    boolean valid = Jsoup.isValid(htmlString, Whitelist.basic());
    
    if (valid) {
      
      System.out.println("The document is valid");
    } else {
      
      System.out.println("The document is not valid.");
      System.out.println("Cleaned document");
      
      Document dirtyDoc = Jsoup.parse(htmlString);
      Document cleanDoc = new Cleaner(Whitelist.basic()).clean(dirtyDoc);

      System.out.println(cleanDoc.html());
    }
  }
}

In the example, we sanitize and clean HTML data.

String htmlString = "<html><head><title>My title</title></head>"
    + "<body><center>Body content</center></body></html>";

The HTML string contains the center element, which is deprecated.

boolean valid = Jsoup.isValid(htmlString, Whitelist.basic());

The isValid() method determines whether the string contains only valid, whitelisted HTML. A whitelist is the set of HTML elements and attributes that are allowed to pass through the cleaner. Whitelist.basic() defines a set of basic, clean HTML tags.

Document dirtyDoc = Jsoup.parse(htmlString);
Document cleanDoc = new Cleaner(Whitelist.basic()).clean(dirtyDoc);

With the help of the Cleaner, we clean the dirty HTML document.

The document is not valid.
Cleaned document
<html>
 <head></head>
 <body>
  Body content
 </body>
</html>

This is the output of the program. We can see that the center element was removed.
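For simple cases, jsoup also offers the one-step Jsoup.clean() method, which parses a body fragment and returns its cleaned HTML in a single call. A minimal sketch (the class name and the input string are our own):

```java
package com.zetcode;

import org.jsoup.Jsoup;
import org.jsoup.safety.Whitelist;

public class JSoupCleanSketch {

  public static void main(String[] args) {

    String dirty = "<p>Hello <script>alert('xss')</script>there</p>";

    // clean() keeps only whitelisted tags; the script element is dropped
    String clean = Jsoup.clean(dirty, Whitelist.basic());

    System.out.println(clean);
  }
}
```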

Performing a Google search

The following example performs a Google search with Jsoup.

JsoupGoogleSearchEx.java

package com.zetcode;

import java.io.IOException;
import java.util.HashSet;
import java.util.Set;
import java.util.regex.Matcher;
import java.util.regex.Pattern;
import org.jsoup.Jsoup;
import org.jsoup.nodes.Document;
import org.jsoup.nodes.Element;
import org.jsoup.select.Elements;

public class JsoupGoogleSearchEx {

  private static Matcher matcher;
  private static final String DOMAIN_NAME_PATTERN
      = "([a-zA-Z0-9]([a-zA-Z0-9\\-]{0,61}[a-zA-Z0-9])?\\.)+[a-zA-Z]{2,15}";
  private static Pattern patrn = Pattern.compile(DOMAIN_NAME_PATTERN);

  public static String getDomainName(String url) {

    String domainName = "";
    matcher = patrn.matcher(url);
    
    if (matcher.find()) {
      domainName = matcher.group(0).toLowerCase().trim();
    }
    
    return domainName;
  }

  public static void main(String[] args) throws IOException {

    String query = "Milky Way";

    String url = "https://www.google.com/search?q=" + query + "&num=10";

    Document doc = Jsoup
        .connect(url)
        .userAgent("Jsoup client")
        .timeout(5000).get();

    Elements links = doc.select("a[href]");

    Set<String> result = new HashSet<>();

    for (Element link : links) {

      String attr1 = link.attr("href");
      String attr2 = link.attr("class");
      
      if (!attr2.startsWith("_Zkb") && attr1.startsWith("/url?q=")) {
      
        result.add(getDomainName(attr1));
      }
    }

    for (String el : result) {
      System.out.println(el);
    }
  }
}

The example creates a search request for the "Milky Way" term. It prints the domain names of the top ten search results.

private static final String DOMAIN_NAME_PATTERN
    = "([a-zA-Z0-9]([a-zA-Z0-9\\-]{0,61}[a-zA-Z0-9])?\\.)+[a-zA-Z]{2,15}";
private static Pattern patrn = Pattern.compile(DOMAIN_NAME_PATTERN);

A Google search returns long links from which we want to get the domain names. For this we use a regular expression pattern.

public static String getDomainName(String url) {

  String domainName = "";
  matcher = patrn.matcher(url);
  
  if (matcher.find()) {
    domainName = matcher.group(0).toLowerCase().trim();
  }
  
  return domainName;
}

The getDomainName() method extracts a domain name from a search result link using the regular expression matcher.
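To see the matcher at work in isolation, the method can be exercised on a sample result link (the link below is a made-up example of the /url?q= form; the class name is ours):

```java
package com.zetcode;

import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class DomainNameSketch {

  private static final String DOMAIN_NAME_PATTERN
      = "([a-zA-Z0-9]([a-zA-Z0-9\\-]{0,61}[a-zA-Z0-9])?\\.)+[a-zA-Z]{2,15}";
  private static final Pattern PATTERN = Pattern.compile(DOMAIN_NAME_PATTERN);

  public static String getDomainName(String url) {

    // the first dotted sequence of labels in the link is the domain name
    Matcher matcher = PATTERN.matcher(url);
    return matcher.find() ? matcher.group(0).toLowerCase().trim() : "";
  }

  public static void main(String[] args) {

    String link = "/url?q=https://en.wikipedia.org/wiki/Milky_Way&sa=U";

    System.out.println(getDomainName(link));  // en.wikipedia.org
  }
}
```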

String query = "Milky Way";

This is our search term.

String url = "https://www.google.com/search?q=" + query + "&num=10";

This is the URL that performs the Google search; the num=10 parameter requests ten results.

Document doc = Jsoup
    .connect(url)
    .userAgent("Jsoup client")
    .timeout(5000).get();

We connect to the URL, set a five-second timeout, and send a GET request. An HTML document is returned.

Elements links = doc.select("a[href]");

From the document, we select the links.

Set<String> result = new HashSet<>();

for (Element link : links) {

  String attr1 = link.attr("href");
  String attr2 = link.attr("class");
  
  if (!attr2.startsWith("_Zkb") && attr1.startsWith("/url?q=")) {
  
    result.add(getDomainName(attr1));
  }
}

We look for links whose class attribute does not start with "_Zkb" and whose href attribute starts with "/url?q=". Note that these are hard-coded values that might change in the future.

for (String el : result) {
  System.out.println(el);
}

Finally, we print the domain names to the console.

en.wikipedia.org
www.space.com
www.nasa.gov
sk.wikipedia.org
www.bbc.co.uk
imagine.gsfc.nasa.gov
www.forbes.com
www.milkywayproject.org
www.youtube.com
www.universetoday.com

These are top Google search results for the "Milky Way" term.

This tutorial was dedicated to the Jsoup HTML parser.

You might also be interested in the related tutorials: Java tutorial, Reading a web page in Java, Reading text files in Java, or Jtwig tutorial.
