Pushing scraped website data into organized Google Sheets columns

Published 2025-02-14 01:17:06

I'm very new to this, but I've been piecing scripts together, living on YouTube, and getting help here to get this far. I'm very close to my end goal, but can't seem to figure out what exactly is off.

The script below scrapes information from different links. I had different versions of this until someone helped finalize the best approach (below):

import requests
from bs4 import BeautifulSoup


def get_links(url):
    data = []
    req_url = requests.get(url)
    soup = BeautifulSoup(req_url.content, "html.parser")

    for td in soup.find_all('td', {'data-th': 'Player'}):
        a_tag = td.a
        name = a_tag.text
        player_url = a_tag['href']
        print(f"Getting {name}")

        req_player_url = requests.get(
            f"https://basketball.realgm.com{player_url}")
        soup_player = BeautifulSoup(req_player_url.content, "html.parser")
        div_profile_box = soup_player.find("div", class_="profile-box")
        row = {"Name": name, "URL": player_url}

        for p in div_profile_box.find_all("p"):
            try:
                key, value = p.get_text(strip=True).split(':', 1)
                row[key.strip()] = value.strip()
            except ValueError:  # not every <p> contains a "key: value" pair
                pass

        data.append(row)

    return data


urls = [
    'https://basketball.realgm.com/dleague/players/2022',
    'https://basketball.realgm.com/dleague/players/2021',
    'https://basketball.realgm.com/dleague/players/2020',
]


for url in urls:
    print(f"Getting: {url}")
    data = get_links(url)

    for entry in data:
        print(entry)

I then figured out, from various videos, how to push data to Google Sheets, and made a test script that pushes different href tags to a Google Sheet (below). I was really happy with this:

import requests
from bs4 import BeautifulSoup

import gspread
gc = gspread.service_account(filename='creds.json')
sh = gc.open_by_key('1cD8mX8tR2iSQgSmpk6QxBVVHYVV8VxGs2uhtA8iBSpQ')
worksheet = sh.sheet1


profiles = []
urls = [
    'https://basketball.realgm.com/dleague/players/2022',
    'https://basketball.realgm.com/dleague/players/2021',
    'https://basketball.realgm.com/dleague/players/2020',
    'https://basketball.realgm.com/dleague/players/2019',
]
for url in urls:
    req = requests.get(url)
    soup = BeautifulSoup(req.text, 'html.parser')
    for profile in soup.find_all('a'):
        profiles.append(profile.get('href'))

# print(profiles)

for p in profiles:
    if p and p.startswith('/player'):   # skip None, since .get('href') can return it
        print(p)
        worksheet.append_row([p])
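A batched variant of the loop above might help here (a sketch; `player_rows` is a hypothetical helper name, not from the original script): collecting the `/player` links first and writing them with a single `worksheet.append_rows` call avoids making one Google Sheets API request per row, which quickly runs into rate limits on longer lists.

```python
def player_rows(hrefs):
    """Keep only /player links (skipping None, since .get('href') can
    return None) and shape each one as a one-cell row."""
    return [[h] for h in hrefs if h and h.startswith('/player')]

# e.g. worksheet.append_rows(player_rows(profiles))  # one API call total
```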

However, my real goal is to get the output of the FIRST script (current output shown below) pushed to a sheet, with the different values funneling into organized columns (like the image below the output).

{'Name': 'Khalil Iverson', 'URL': '/player/Khalil-Iverson/Summary/88527', 'Current Team': 'N/A', 'Born': 'Jul 19, 1997(24 years old)', 'Birthplace/Hometown': e, Ohio', 'Nationality': 'United States', 'Height': '6-5 (196cm)Weight:210 (95kg)', 'Current NBA Status': 'Unrestricted Free Agent', 'Agent': 'Mike Naiditch',
Entry': '2019 NBA Draft', 'Drafted': 'Undrafted', 'Pre-Draft Team': 'Wisconsin(Sr)', 'High School': 'Rutherford B. Hayes High School[Delaware, Ohio]'}
{'Name': 'Jarrett Jack', 'URL': '/player/Jarrett-Jack/Summary/366', 'Born': 'Oct 28, 1983(38 years old)', 'Birthplace/Hometown': 'Fort Washington, Maryland', lity': 'United States', 'Height': '6-3 (191cm)Weight:200 (91kg)', 'Hand': 'Right', 'Current NBA Status': 'Unrestricted Free Agent', 'Agent': 'Sam Goldfeder', ntry': '2005 NBA Draft', 'Early Entry Info': '2005 Early Entrant', 'Drafted': 'Round 1, Pick 22, Denver Nuggets', 'Draft Rights Trade': 'DEN to POR, Jun 28, 2re-Draft Team': 'Georgia Tech(Jr)', 'High School': 'Worcester Academy[Worcester, Massachusetts]'}
{'Name': 'Kadeem Jack', 'URL': '/player/Kadeem-Jack/Summary/24639', 'Current Team': 'N/A', 'Born': 'Oct 27, 1992(29 years old)', 'Birthplace/Hometown': 'Queenork', 'Nationality': 'United States', 'Height': '6-9 (206cm)Weight:225 (102kg)', 'Current NBA Status': 'Unrestricted Free Agent', 'Agent': 'Chris Emens,Josh Bd-Bell', 'Draft Entry': '2015 NBA Draft', 'Drafted': 'Undrafted', 'Pre-Draft Team': 'Rutgers(Sr)', 'High School': 'CJEOTO Academy[Somerset, New Jersey]'}
{'Name': 'Demetrius Jackson', 'URL': '/player/Demetrius-Jackson/Summary/43395', 'Current Team': 'N/A', 'Born': 'Sep 7, 1994(27 years old)', 'Birthplace/Hometoshawaka, Indiana', 'Nationality': 'United States', 'Height': '6-1 (185cm)Weight:200 (91kg)', 'Current NBA Status': 'Unrestricted Free Agent', 'Agent': 'Rade Fh', 'Draft Entry': '2016 NBA Draft', 'Early Entry Info': '2016 Early Entrant', 'Drafted': 'Round 2, Pick 15, Boston Celtics', 'Pre-Draft Team': 'Notre Dame(Jrh School': 'Marian High School[Mishawaka, Indiana]', 'AAU Team': 'MBA Select'}
{'Name': 'Josh Jackson', 'URL': '/player/Josh-Jackson/Summary/52032', 'Full Name': "Joshua O'Neal Jackson", 'Current Team': 'N/A', 'Born': 'Feb 10, 1997(25 ye', 'Birthplace/Hometown': 'San Diego, California', 'Nationality': 'United States', 'Height': '6-8 (203cm)Weight:207 (94kg)', 'Current NBA Status': 'Unrestrict
Agent (Sacramento Kings)', 'Agent': 'B.J. Armstrong', 'Draft Entry': '2017 NBA Draft', 'Early Entry Info': '2017 Early Entrant', 'Drafted': 'Round 1, Pick 4, 
Suns', 'Pre-Draft Team': 'Kansas(Fr)', 'High School': 'Prolific Prep[Napa, California]', 'AAU Team': 'One Nation'}
{'Name': 'Pierre Jackson', 'URL': '/player/Pierre-Jackson/Summary/28285', 'Current Team': 'N/A', 'Born': 'Aug 29, 1991(30 years old)', 'Birthplace/Hometown': as, Nevada', 'Nationality': 'United States', 'Height': '5-10 (178cm)Weight:180 (82kg)', 'Current NBA Status': 'Unrestricted Free Agent', 'Agent': 'Charles Misrrison Gaines', 'Draft Entry': '2013 NBA Draft', 'Drafted': 'Round 2, Pick 12, Philadelphia Sixers', 'Draft Rights Trade': 'PHL to NOP, Jul 10, 2013, NOP to P
27, 2014', 'Pre-Draft Team': 'Baylor(Sr)', 'High School': 'Desert Pines High School[Las Vegas, Nevada]'}
{'Name': 'Justin James', 'URL': '/player/Justin-James/Summary/84791', 'Current Team': 'N/A', 'Born': 'Jan 24, 1997(25 years old)', 'Birthplace/Hometown': 'Porcie, Florida', 'Nationality': 'United States', 'Height': '6-7 (201cm)Weight:190 (86kg)', 'Current NBA Status': 'Unrestricted Free Agent', 'Agent': 'Austin Walraft Entry': '2019 NBA Draft', 'Early Entry Info': '2018 Early Entrant(Withdrew)', 'Drafted': 'Round 2, Pick 10, Sacramento Kings', 'Pre-Draft Team': 'WyomingHigh School': 'Oldsmar Christian Academy[Oldsmar, Florida]'}

Expected output in Sheets:

[Image: the scraped fields laid out as separate columns in Sheets]

I tried to figure it out (below), but keep running into issues. Is this possible? Any guidance is greatly appreciated. This has been one of the best learning experiences of my life, but I've really hit a brick wall here:

import requests
from bs4 import BeautifulSoup

import gspread
gc = gspread.service_account(filename='creds.json')
sh = gc.open_by_key('1cD8mX8tR2iSQgSmpk6QxBVVHYVV8VxGs2uhtA8iBSpQ')
worksheet = sh.sheet1


def get_links(url):
    data = []
    req_url = requests.get(url)
    soup = BeautifulSoup(req_url.content, "html.parser")

    for td in soup.find_all('td', {'data-th': 'Player'}):
        a_tag = td.a
        name = a_tag.text
        player_url = a_tag['href']
        print(f"Getting {name}")

        req_player_url = requests.get(
            f"https://basketball.realgm.com{player_url}")
        soup_player = BeautifulSoup(req_player_url.content, "html.parser")
        div_profile_box = soup_player.find("div", class_="profile-box")
        row = {"Name": name, "URL": player_url}

        for p in div_profile_box.find_all("p"):
            try:
                key, value = p.get_text(strip=True).split(':', 1)
                row[key.strip()] = value.strip()
            except ValueError:  # not every <p> contains a "key: value" pair
                pass

        data.append(row)

    return data


urls = [
    'https://basketball.realgm.com/dleague/players/2022',
    'https://basketball.realgm.com/dleague/players/2021',
    'https://basketball.realgm.com/dleague/players/2020',
]


for url in urls:
    print(f"Getting: {url}")
    data = get_links(url)

    for entry in data:
        worksheet.append_row(entry)   # this line fails: entry is a dict, but append_row expects a list
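One likely way forward (a sketch under assumptions, not a confirmed fix): gspread's `append_row` expects a flat list of cell values, so each profile dict would need to be flattened against a fixed header order first. `HEADERS` and `rows_from_profiles` below are illustrative names, and the header list only covers fields visible in the output dump above; missing fields become empty strings so the columns stay aligned.

```python
HEADERS = ["Name", "URL", "Current Team", "Born", "Birthplace/Hometown",
           "Nationality", "Height", "Current NBA Status", "Agent",
           "Draft Entry", "Drafted", "Pre-Draft Team", "High School"]

def rows_from_profiles(profiles, headers=HEADERS):
    """Order each profile dict's values by the header list, filling
    missing keys with an empty string so columns line up."""
    return [[p.get(h, "") for h in headers] for p in profiles]

# Usage against the script above (assumed, untested):
# worksheet.append_row(HEADERS)                     # write the header row once
# worksheet.append_rows(rows_from_profiles(data))   # one batched call per URL
```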
