'NoneType' object has no attribute 'find_all' with BeautifulSoup
I have tried this code that I found, but it gives me the error message AttributeError: 'NoneType' object has no attribute 'find_all'.
I am not familiar with BeautifulSoup and don't know how to fix this. I tried to find a solution where I ignore the tabpane part, but could not figure it out.
Do you have any suggestions?
import datetime
import pandas as pd  # pip install pandas
import requests  # pip install requests
from bs4 import BeautifulSoup  # pip install beautifulsoup4

headers = {
    'user-agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:87.0) Gecko/20100101 Firefox/87.0',
}

url = 'https://www.marketwatch.com/tools/earningscalendar'
response = requests.get(url, headers=headers)
soup = BeautifulSoup(response.content, 'html.parser')

tabpane = soup.find('div', 'tabpane')
earning_tables = tabpane.find_all('div', {'id': True})

dfs = {}
current_datetime = datetime.datetime.now().strftime('%m-%d-%y %H_%M_%S')
xlsxwriter = pd.ExcelWriter('Earning Calendar ({0}).xlsx'.format(current_datetime), index=False)

for earning_table in earning_tables:
    if not 'Sorry, this date currently does not have any earnings announcements scheduled' in earning_table.text:
        earning_date = earning_table['id'].replace('page', '')
        earning_date = earning_date[:3] + '_' + earning_date[3:]
        print(earning_date)
        dfs[earning_date] = pd.read_html(str(earning_table.table))[0]
        dfs[earning_date].to_excel(xlsxwriter, sheet_name=earning_date, index=False)

xlsxwriter.save()
print('earning tables Excel file exported')
1 Answer
To grab all tables in the page:
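(The answer's code block did not survive extraction; what follows is a minimal sketch of this step, reusing the url and headers from the question. pandas.read_html parses every <table> element in the HTML it is given and returns one DataFrame per table.)

import pandas as pd
import requests

headers = {
    'user-agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:87.0) Gecko/20100101 Firefox/87.0',
}
url = 'https://www.marketwatch.com/tools/earningscalendar'

response = requests.get(url, headers=headers)
# read_html returns a list of DataFrames, one per <table> found in the page
tables = pd.read_html(response.text)
print(len(tables))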
To look at just the first one:
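(Again a sketch, assuming tables is the list returned by read_html in the previous snippet.)

# the first <table> on the page
print(tables[0])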
If you are sure all tables have the same columns, you can concat them to get a single dataframe:
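(A sketch of the concat step, under the same assumption that tables holds the DataFrames from read_html.)

import pandas as pd

# stack all the per-date tables into one DataFrame, renumbering the index
df = pd.concat(tables, ignore_index=True)
print(df.shape)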