Running Scrapy on PythonAnywhere? (or Cloud9)

Posted 2025-02-10 19:23:08

Can I run Scrapy on the free level of PythonAnywhere? I've looked, but haven't found instructions for installing it there.

If it can't be run on the free level of PythonAnywhere, is there another online environment where I can run Scrapy without needing to install Python and Scrapy on my computer?

EDIT: My question was just about PythonAnywhere, but in finding the answer to the question, I came across Cloud9 and found it to be a preferable alternative, which is explained in the answer.

Comments (2)

亣腦蒛氧 2025-02-17 19:23:08

Short summary:

  • Scrapy comes preinstalled on PythonAnywhere. No installation required.
  • I found an alternative that I like better: Cloud9. I was able to install Scrapy on it, but with a security issue that probably won't be a problem for me.

====================================

There were three parts to my question:

  1. Can I run Scrapy in the free level of PythonAnywhere? This part has been answered: Yes, but with debilitating restrictions.

The other two parts have not been answered, but I've found some answers and will share them here.

  2. What other online environments allow me to run Scrapy without needing to install Python and Scrapy on my computer? I haven't found a direct answer to this, but the free tutorial website, Python for Everybody ("Py4E"), has a page, Setting up your Python Development Environment, which lists four online Python environments. It provides a brief tutorial on PythonAnywhere and then just provides links to the other three: Trinket, Cloud9, and CodeAnywhere.

None of those four environments says anything about running Scrapy. With some more research, I did find out how to use Scrapy on PythonAnywhere, which I explain below. Of the other three, Cloud9 is part of Amazon's AWS suite, a sophisticated set of software tools whose other parts I've used before. Based on that, I assumed it could also accommodate Scrapy, and I checked it out as well. I've added the results below as a new part 4 to my question.

  3. Now, the main part of my question: How to install Scrapy on PythonAnywhere? The answer is:
  • You don't. It's already installed!

It's amazing that PythonAnywhere's otherwise excellent documentation doesn't say anything about this. I found it out by following instructions that I hoped would lead me to installing Scrapy:

  • First, since I'm new to Python (but not to programming), I ran through Py4E's tutorial on PythonAnywhere, which is really a quick introduction to Python: it told me to use the Bash Unix shell instead of the Python interpreter ("$" instead of ">>>") and had me write a simple program and save it to a file.

  • Next, I went to Scrapy's installation instructions. It has this wonderful line: "... if you’re already familiar with installation of Python packages, you can install Scrapy and its dependencies from PyPI with: pip install Scrapy". Of course, it doesn't follow that by saying what to do if I'm not familiar with that. Sigh!

  • After that, I somehow found my way to Python's official instructions on Installing Packages, which starts by explaining that "package" means "a bundle of software to be installed", so I thought that might include Scrapy. So I ran through the instructions there, and about half-way through, it told me to run:

python3 -m pip install "SomeProject"

(* Footnote below on syntax of that command)

The instructions said that "SomeProject" is supposed to be a project that's included in the Python Package Index, so I went there and searched for Scrapy. It gave me a list of 681 projects with "scrapy" in the name, and some of them looked like they might be various versions of Scrapy itself. None of them were called just "Scrapy", but the Scrapy instruction quoted above said to use just that name. So I held my breath and entered:

python3 -m pip install Scrapy

And guess what I got? PythonAnywhere told me:

Requirement already satisfied: Scrapy in /usr/local/lib/python3.9/site-packages (2.5.0)

That was followed by a couple of dozen more lines that all started with "Requirement already satisfied", which I took to be the dependencies required by Scrapy, all of them already present and ready to roll.
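
Incidentally, there is a way to confirm that from Python itself, without re-running the installer. Here is a minimal sketch using importlib.metadata, which ships with Python 3.8+ and is therefore available in the Python 3.9 environment shown in the pip output above:

# Check whether Scrapy is installed, and which version, without invoking pip.
from importlib.metadata import version, PackageNotFoundError

try:
    print("Scrapy", version("Scrapy"))  # prints e.g. "Scrapy 2.5.0"
except PackageNotFoundError:
    print("Scrapy is not installed")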

  • So, hmmm, Scrapy is already there? To find out if that's really true, I went to the tutorial on Scrapy's website. The first thing it said was to create a project by using the command:
scrapy startproject tutorial

I entered that, and PythonAnywhere told me that it had successfully created a new project. Since this was a Scrapy command, I concluded that, yes, indeed, I already have Scrapy installed and running on PythonAnywhere. No installation necessary!
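
For anyone who wants to go a step beyond startproject, here is a minimal spider sketch adapted from the official Scrapy tutorial; the file path and the quotes.toscrape.com demo site come from that tutorial, not from anything PythonAnywhere-specific:

# tutorial/spiders/quotes_spider.py -- a minimal spider from the Scrapy tutorial.
import scrapy

class QuotesSpider(scrapy.Spider):
    name = "quotes"
    start_urls = ["https://quotes.toscrape.com/"]

    def parse(self, response):
        # Yield one record per quote block on the page.
        for quote in response.css("div.quote"):
            yield {
                "text": quote.css("span.text::text").get(),
                "author": quote.css("small.author::text").get(),
            }

Running scrapy crawl quotes -o quotes.json from the project directory then writes the scraped records to a JSON file.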

  4. What about Cloud9? As I said above in my answer to part 2, when I found out about Cloud9, I was interested because it's part of Amazon Web Services ("AWS"). I've used other parts of AWS before and found them to be sophisticated, complicated, powerful, and well-documented. They are also very economical.

AWS is a commercial system run by Amazon. It charges fees based on usage, with no minimums, and with low-volume usage being free. The pricing page for Cloud9 shows it to be no exception. Cloud9 itself is free to use, but using it calls on other AWS resources that have charges.

The pricing page gives the following example: "If you use the default settings running an IDE for 4 hours per day for 20 days in a month with a 30-minute auto-hibernation setting your monthly charges for 90 hours of usage would be ... $2.05". That's less than half the lowest monthly cost of PythonAnywhere. (As stated in the answer by Giles Thomas, the free level of PythonAnywhere is not very useful for Scrapy.) I'm not sure how the amount of usage in the Cloud9 example compares with the amount of usage allowed by PythonAnywhere's $5/mo service, but my usage is going to be a lot less than either one, so I expect my cost of using Cloud9 to be very low, possibly nothing. Furthermore, since I only use Scrapy for a project a couple of times a year, with PythonAnywhere I'd have to close my account between projects to stop being charged; AWS doesn't charge me when I'm not using it, so I can keep the account open between projects at no cost.
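
To put the quoted example in concrete terms, here is the back-of-envelope arithmetic (presumably the 90 billed hours are the 80 active hours plus the twenty half-hour auto-hibernation windows):

# 4 hours/day x 20 days = 80 active hours; the example bills 90 hours.
hours_billed = 90
monthly_cost = 2.05                 # from the Cloud9 pricing example above
print(monthly_cost / hours_billed)  # ~$0.023 per billed hour
print(monthly_cost / 5.00)          # ~41% of PythonAnywhere's $5/mo plan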

So based on both the quality of the AWS modules I've used and the low usage cost, I was very interested in Cloud9 as an alternative.

And I was not surprised to find that I could use Scrapy in it.

To figure that out, I quickly abandoned the webpage instructions in favor of downloading a pdf of the comprehensive User Guide from the documentation page. Comprehensive = 595 pages! But it's very well organized and cross-referenced, so I was able to learn what I needed by reading about 20 pages, which included a tutorial on using the GUI environment (pages 29-38) and another on using Python in Cloud9 (pages 423-427).

In that second tutorial, I ran:

  • python3 --version to find out that Python was already installed, version 3.7.10.
  • python -m pip --version to find out that pip version 20.2.2 is running.

After that tutorial, I was ready to find out if Scrapy is there. I had learned by then about pip show, so I ran:

  • python -m pip show Scrapy

The answer was no:

  • WARNING: Package(s) not found: Scrapy

So I repeated the command that I'd done earlier in PythonAnywhere:

  • python3 -m pip install Scrapy

This time, there were very few "Requirement already satisfied"s and instead there were a lot of "Collecting ... Downloading"s, followed by "Installing collected packages" and then "Successfully installed" with a long list that included Scrapy-2.6.1.

I repeated python -m pip show Scrapy and got several lines of output that told me Scrapy 2.6.1 is installed. Finally, I ran the same test I'd run before in PythonAnywhere, the first instruction in the official Scrapy tutorial:

scrapy startproject tutorial

and got the same output as before, telling me that the project had been created.

Bingo! I have Scrapy running in Cloud9.

On the negative side, there was a problem here. AWS has two levels of sign-in authority: the root user and IAM users. For proper security, I should be running Cloud9 as an IAM user, but there was a problem signing in that way. I posted a question on SO about that, but while waiting for an answer, went ahead and started using Cloud9 as the root user. In the course of that, I got the message:

WARNING: Running pip install with root privileges is generally not a good idea.

That warning came with a suggestion of an alternative command that didn't make sense and didn't work when I tried it. So I'm not sure how much I've messed up the security of my AWS account by what I've been doing here. My work is not secretive, so the security may be a non-issue, but I'd still like to figure out how to proceed as an IAM user and clean up any damage I might have caused by what I've been doing as the root user. If anyone knows about that, please respond to the SO question about it linked in the previous paragraph.
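
For what it's worth, the standard way to avoid installing packages as root (and to avoid the warning above) is to install into a virtual environment rather than the system site-packages. This is a generic sketch of that approach, not something taken from the Cloud9 documentation:

python3 -m venv scrapy-env          # create an isolated environment
source scrapy-env/bin/activate      # activate it in the current shell
python3 -m pip install Scrapy       # installs into scrapy-env; no root needed
scrapy version                      # confirm the install inside the venv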

So now I've got Scrapy running in Cloud9, and I'm going to go find out if it can get the data I need. I'll make another edit here if there are any surprises in terms of Cloud9 either (a) not being able to do something or (b) resulting in unexpected charges.

====================================

(*) Footnote on syntax of python3 -m pip install "SomeProject":
Since I was working in something called PythonAnywhere, I was tempted to think that this was a Python command. But then I had to remember that, within PythonAnywhere, I was working in Bash, a Unix shell. So python3 is a shell command that launches the Python interpreter. I haven't found documentation of that exact command, but I did find documentation of the python command it's presumably based on. That documentation says, "-m module-name Searches ... for the named module and runs the corresponding .py file as a script." So this means that pip is a Python module for installing Python packages, and install <project name> is the argument list passed to the pip module. (Somebody please correct me if I've said any of that wrong.)
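
To make the -m mechanics concrete, here is a small illustrative sketch of roughly what python3 -m pip --version does: Python locates the pip package on sys.path and runs it as the __main__ module. This is purely a demonstration of the flag, not how pip should normally be invoked:

# Roughly equivalent to running `python3 -m pip --version` from the shell.
import runpy
import sys

sys.argv = ["pip", "--version"]  # simulate the command-line arguments
try:
    runpy.run_module("pip", run_name="__main__")  # prints pip's version string
except SystemExit:
    pass  # pip finishes by calling sys.exit()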

屋顶上的小猫咪 2025-02-17 19:23:08

You can, but free accounts on PythonAnywhere are limited to accessing sites on a whitelist of official public APIs, so you will probably not be able to access non-API sites.
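
A quick way to test that limit from a free account's Bash console is to try fetching a site and see whether the request gets through the proxy. A minimal sketch, assuming the requests library (preinstalled on PythonAnywhere) and using example.com as a stand-in target:

# On a free PythonAnywhere account, requests to hosts outside the whitelist
# are rejected at the proxy; whitelisted hosts respond normally.
import requests

try:
    r = requests.get("https://example.com", timeout=10)
    print("reachable:", r.status_code)
except requests.exceptions.RequestException as e:
    print("blocked or unreachable:", e)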
