
182. Duplicate Emails

Chinese documentation

Description

Table: Person

+-------------+---------+
| Column Name | Type    |
+-------------+---------+
| id          | int     |
| email       | varchar |
+-------------+---------+
id is the primary key (column with unique values) for this table.
Each row of this table contains an email. The emails will not contain uppercase letters.


Write a solution to report all the duplicate emails. Note that it's guaranteed that the email field is not NULL.

Return the result table in any order.

The result format is in the following example.


Example 1:

Input: 
Person table:
+----+---------+
| id | email   |
+----+---------+
| 1  | a@b.com |
| 2  | c@d.com |
| 3  | a@b.com |
+----+---------+
Output: 
+---------+
| Email   |
+---------+
| a@b.com |
+---------+
Explanation: a@b.com is repeated two times.

Solutions

Solution 1: Group By + Having

We can use the GROUP BY statement to group the data by the email field, and then use the HAVING clause to keep only the emails that appear more than once.

MySQL:

# Write your MySQL query statement below
SELECT email
FROM Person
GROUP BY 1
HAVING COUNT(1) > 1;

The same idea can be expressed with pandas: mark every row whose email has already appeared earlier in the table, keep those rows, and drop the remaining duplicates so each repeated email is reported once.

Pandas:

import pandas as pd


def duplicate_emails(person: pd.DataFrame) -> pd.DataFrame:
    # Rows whose email appeared earlier in the table are the duplicates.
    results = person.loc[person.duplicated(subset=["email"]), ["email"]]
    # Report each duplicated email only once.
    return results.drop_duplicates()
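For comparison, here is a rough pandas sketch (not part of the original solution; the function name duplicate_emails_groupby is made up for illustration) that mirrors the SQL GROUP BY + HAVING logic directly: count the occurrences of each email, then keep the emails whose count exceeds one.

import pandas as pd


def duplicate_emails_groupby(person: pd.DataFrame) -> pd.DataFrame:
    # GROUP BY email: count how many rows share each email.
    counts = person.groupby("email").size().reset_index(name="cnt")
    # HAVING COUNT(*) > 1: keep only the emails that occur more than once.
    return counts.loc[counts["cnt"] > 1, ["email"]]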

Solution 2: Self-Join

We can join the Person table with itself, and then keep the record pairs where the ids differ but the emails are the same; those emails are the duplicates.

SELECT DISTINCT p1.email
FROM
  Person AS p1,
  Person AS p2
WHERE p1.id != p2.id AND p1.email = p2.email;
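If you prefer to stay in pandas, the same self-join idea can be sketched with a merge on email (a hypothetical duplicate_emails_self_join helper, assuming the same person DataFrame as above): join the table with itself, keep the pairs whose ids differ, and deduplicate the surviving emails.

import pandas as pd


def duplicate_emails_self_join(person: pd.DataFrame) -> pd.DataFrame:
    # Self-join on email; suffixes distinguish the two copies of the id column.
    merged = person.merge(person, on="email", suffixes=("_1", "_2"))
    # Keep pairs coming from different rows, i.e. the email is shared by at least two ids.
    dup = merged.loc[merged["id_1"] != merged["id_2"], ["email"]]
    # DISTINCT: report each duplicated email once.
    return dup.drop_duplicates()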
