What is the computer science definition of entropy?

Posted 2024-07-12 23:25:23


I've recently started a course on data compression at my university. However, I find the use of the term "entropy" as it applies to computer science rather ambiguous. As far as I can tell, it roughly translates to the "randomness" of a system or structure.

What is the proper definition of computer science "entropy"?


Comments (15)

緦唸λ蓇 2024-07-19 23:25:24

Entropy refers to the extent to which software is occasionally reshaped to meet customer requirements, to the point where the cost of reshaping it to satisfy those requirements becomes maximal.

半葬歌 2024-07-19 23:25:24


I've heard people misuse the thermodynamic definition of entropy w.r.t. CS.

E.g. "Entropy is definitely increasing in this system."

When what they mean is that this code is getting worse and worse!

何其悲哀 2024-07-19 23:25:23

Entropy can mean different things:

Computing

In computing, entropy is the randomness collected by an operating system or application for use in cryptography or other uses that require random data. This randomness is often collected from hardware sources, either pre-existing ones such as mouse movements or specially provided randomness generators.

Information theory

In information theory, entropy is a measure of the uncertainty associated with a random variable. The term by itself in this context usually refers to the Shannon entropy, which quantifies, in the sense of an expected value, the information contained in a message, usually in units such as bits. Equivalently, the Shannon entropy is a measure of the average information content one is missing when one does not know the value of the random variable.

Entropy in data compression

Entropy in data compression denotes the randomness of the data that you are inputting to the compression algorithm. The higher the entropy, the less the data can be compressed: the more random the text is, the less you can compress it.

Shannon's entropy represents an absolute limit on the best possible lossless compression of any communication: treating messages to be encoded as a sequence of independent and identically-distributed random variables, Shannon's source coding theorem shows that, in the limit, the average length of the shortest possible representation to encode the messages in a given alphabet is their entropy divided by the logarithm of the number of symbols in the target alphabet.
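
To make the two quotes above concrete, here is a minimal Python sketch (the message is an arbitrary example) that estimates the Shannon entropy of a string from its symbol frequencies and uses it as the source-coding lower bound on the average number of bits per symbol:

    from collections import Counter
    from math import log2

    def shannon_entropy(message: str) -> float:
        """Estimate H(X) = -sum p(x) * log2(p(x)) from symbol frequencies."""
        counts = Counter(message)
        total = len(message)
        return -sum((n / total) * log2(n / total) for n in counts.values())

    message = "abracadabra"  # arbitrary example message
    h = shannon_entropy(message)
    print(f"entropy: {h:.3f} bits per symbol")
    # Per the source coding theorem, no lossless binary code for this source can
    # average fewer than h bits per symbol, so len(message) * h is a lower bound
    # (in bits) on the size of any lossless encoding of the message.
    print(f"lower bound: {len(message) * h:.1f} bits for {len(message)} symbols")

Because the target alphabet here is binary, log2(2) = 1, so the bound is simply the entropy itself.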

笑叹一世浮沉 2024-07-19 23:25:23


My favorite definition, with a more practical focus, is found in Chapter 1 of the excellent book The Pragmatic Programmer: From Journeyman to Master by Andrew Hunt and David Thomas:

Software Entropy

While software development is immune from almost all physical laws, entropy hits us hard. Entropy is a term from physics that refers to the amount of "disorder" in a system. Unfortunately, the laws of thermodynamics guarantee that the entropy in the universe tends toward a maximum. When disorder increases in software, programmers call it "software rot."

There are many factors that can contribute to software rot. The most important one seems to be the psychology, or culture, at work on a project. Even if you are a team of one, your project's psychology can be a very delicate thing. Despite the best laid plans and the best people, a project can still experience ruin and decay during its lifetime. Yet there are other projects that, despite enormous difficulties and constant setbacks, successfully fight nature's tendency toward disorder and manage to come out pretty well.

...

...

A broken window.

One broken window, left unrepaired for any substantial length of time, instills in the inhabitants of the building a sense of abandonment—a sense that the powers that be don't care about the building. So another window gets broken. People start littering. Graffiti appears. Serious structural damage begins. In a relatively short space of time, the building becomes damaged beyond the owner's desire to fix it, and the sense of abandonment becomes reality.

The "Broken Window Theory" has inspired police departments in New York and other major cities to crack down on the small stuff in order to keep out the big stuff. It works: keeping on top of broken windows, graffiti, and other small infractions has reduced the serious crime level.

Tip 4

Don't Live with Broken Windows

Don't leave "broken windows" (bad designs, wrong decisions, or poor code) unrepaired. Fix each one as soon as it is discovered. If there is insufficient time to fix it properly, then board it up. Perhaps you can comment out the offending code, or display a "Not Implemented" message, or substitute dummy data instead. Take some action to prevent further damage and to show that you're on top of the situation.

Text taken from: http://pragprog.com/the-pragmatic-programmer/extracts/software-entropy

终弃我 2024-07-19 23:25:23


I always encountered entropy in the sense of Shannon Entropy.

From http://en.wikipedia.org/wiki/Information_entropy:

In information theory, entropy is a measure of the uncertainty associated with a random variable. The term by itself in this context usually refers to the Shannon entropy, which quantifies, in the sense of an expected value, the information contained in a message, usually in units such as bits. Equivalently, the Shannon entropy is a measure of the average information content one is missing when one does not know the value of the random variable.

痴情 2024-07-19 23:25:23


[image] (source: mit.edu)

from University of Mexico

The information theoretic notion of Entropy is a generalization of the physical notion. There are many ways to describe Entropy. It is a measure of the randomness of a random variable. It is also a measure of the amount of information a random variable or stochastic process contains. It is also a lower bound on the amount a message can be compressed. And finally it is the average number of yes/no questions that need to be asked about a random entity to determine its value.

Equation for Entropy in a sample application for probability calculation:

it is the negative of the sum, over all values of a rv, of the probability of that value times the log of that probability (i.e. -∑ p(x) log p(x)). This equation can be derived from first principles of the properties of information.
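
Written out as a formula, that is:

    H(X) = -\sum_{x} p(x) \log_2 p(x)

which matches the Shannon entropy quoted earlier, and is equivalent to the ∑ pi log(1/pi) form used in a later answer.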

乖乖公主 2024-07-19 23:25:23


Here is a great alternate explanation for entropy in information theory.

Entropy is a measure of the uncertainty involved in making a prediction.

We can also describe entropy as how surprised we would be if we get an outcome after we made our initial prediction.

Let's say we have a bent coin that gives us a head 99% of the time and a tail 1% of the time. Since there is only a one percent chance of getting a tail, we would be very surprised if we actually got a tail. On the other hand, it won't be too surprising if we get a head, as we already have a 99 percent chance of getting one.

Let's assume that we have a function called Surprise(x) that gives us the amount of surprise for each outcome; then we can average the amount of surprise over the probability distribution. This average amount of surprise can also be used as a measure of how uncertain we are. This uncertainty is called entropy.

UPDATE:

I made this visualization to describe the relationship between entropy and the confidence of the predicted class in an animal image classifier model (machine learning). Here entropy is used as a measure of how confident the classifier model is in its prediction.

[Image: entropy as a confidence measure]

The diagrams show a comparison of entropy values of predictions from two classifier models. The diagram on the right predicts an image of a horse with relatively high confidence (lower entropy), while the classifier on the left cannot really distinguish (higher entropy) whether it's a Horse, a Cow, or a Giraffe.
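
A minimal sketch of that idea in Python (the class labels and probability values are made up for illustration): the entropy of a model's predicted distribution is low when one class dominates and high when the probabilities are spread out.

    from math import log2

    def prediction_entropy(probs):
        """Shannon entropy of a predicted class distribution, in bits."""
        return -sum(p * log2(p) for p in probs if p > 0)

    # Hypothetical softmax outputs over the classes (Horse, Cow, Giraffe)
    confident = [0.94, 0.04, 0.02]   # "right-hand" model: fairly sure it's a horse
    uncertain = [0.40, 0.35, 0.25]   # "left-hand" model: cannot really tell

    print(f"confident prediction: {prediction_entropy(confident):.2f} bits")  # ~0.38 (low)
    print(f"uncertain prediction: {prediction_entropy(uncertain):.2f} bits")  # ~1.56 (high)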

暖伴 2024-07-19 23:25:23


Super SIMPLE definition

The word entropy can be defined in one sentence:

"The amount of information needed to describe a system."

Imagine, for example, the expansion of the universe: in the beginning, before the big bang, all matter was collected in a small point, so we could have described the system with "all matter is within one point." Today significantly more information is required to describe the system (the universe, that is): one would need to describe all planetary positions, their movement, what's on them, etc.
In terms of information theory, the definition also works: e.g., the more letters you add to a password (the system), the more information is needed to describe the password. You can then measure it in different units, e.g. bits or characters, like
"hello" = 5 characters entropy = 40 bits of entropy (if charsize is 8 bits).

From this it also follows that the more information you have, the more ways you can arrange it. If you have 40 bits, there are 2^40 different ways they can be arranged. If we are talking about passwords, then the more possible arrangements of the information (bits) there are, the longer cracking will take (with brute force or dictionary attacks).
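
A small sketch of that counting argument, assuming (as the answer does) that every character contributes a full 8 independent bits, which makes this an upper bound on real password entropy:

    def description_bits(password: str, bits_per_char: int = 8) -> int:
        """Bits needed to describe the password if each character takes bits_per_char bits."""
        return len(password) * bits_per_char

    bits = description_bits("hello")   # 5 characters * 8 bits = 40 bits
    arrangements = 2 ** bits           # number of distinct 40-bit strings
    print(f"{bits} bits -> {arrangements} possible arrangements")
    # A brute-force attacker testing a billion guesses per second would need
    # up to 2**bits / 1e9 seconds (~18 minutes here) to try them all.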

浅忆 2024-07-19 23:25:23


In terms of compression and information theory, the entropy of a source is the average amount of information (in bits) that symbols from the source can convey. Informally speaking, the more unlikely a symbol is, the more surprise its appearance brings.

If your source has two symbols, say A and B, and they are equally likely, then each symbol conveys the same amount of information (one bit). A source with four equally likely symbols conveys two bits per symbol.

For a more interesting example, if your source has three symbols, A, B, and C, where the first two are twice as likely as the third, then the third is more surprising but is also less likely. There's a net entropy of 1.52 for this source, as calculated below.

You calculate entropy as the "average surprise", where the "surprise" for each symbol is its probability times the negative binary log of the probability:

                            binary
symbol  weight  probability   log    surprise
  A        2        0.4      -1.32    0.53
  B        2        0.4      -1.32    0.53
  C        1        0.2      -2.32    0.46
total      5        1.0               1.52

The negative of the binary log is used (of course) because logs of values between 0 and 1 (exclusive) are negative.
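
A short Python sketch that reproduces the table's arithmetic:

    from math import log2

    weights = {"A": 2, "B": 2, "C": 1}
    total = sum(weights.values())

    entropy = 0.0
    for symbol, weight in weights.items():
        p = weight / total              # probability column
        surprise = p * -log2(p)         # probability times negative binary log
        print(f"{symbol}: p={p:.1f}  log2(p)={log2(p):.2f}  surprise={surprise:.2f}")
        entropy += surprise

    print(f"total entropy: {entropy:.2f} bits per symbol")   # ~1.52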

柳若烟 2024-07-19 23:25:23


In simpler words, entropy defines randomness. It's more like how unpredictable something is. In more technical words: "In computing, entropy is the randomness collected by an operating system or application for use in cryptography or other uses that require random data. This randomness is often collected from hardware sources, either pre-existing ones such as mouse movements or specially provided randomness generators." as defined by Wikipedia.

One can now easily conclude that the entropy of a file is a measure of how disordered its bytes are. Various units are used for entropy, such as the nat, the shannon, or the hartley; the most commonly used is the shannon (bit). Per Shannon's formula, a file's entropy (measured in bits per byte) ranges from 0 to 8. So, when the entropy value is zero, one can say the outcome is certain. On the contrary, when the entropy value is 8, the outcome is as unpredictable as it could be. The formula given by Shannon to measure randomness in the outcome of events is:

          Entropy = ∑ pi log(1/pi)

where i is the event with probability pi.

For byte-valued data (with the log taken base 2), this formula always yields a value between 0 and 8.

For more information, go through the link: https://www.talentcookie.com/2016/02/file-entropy-in-malware-analysis/
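
A minimal sketch of that byte-level file-entropy measurement (the result always falls between 0 and 8 bits per byte); the file path is whatever you pass on the command line:

    import sys
    from collections import Counter
    from math import log2

    def file_entropy(path: str) -> float:
        """Shannon entropy of a file's byte distribution, in bits per byte (0..8)."""
        with open(path, "rb") as f:
            data = f.read()
        if not data:
            return 0.0
        counts = Counter(data)
        total = len(data)
        return sum((n / total) * log2(total / n) for n in counts.values())

    if __name__ == "__main__":
        print(f"{file_entropy(sys.argv[1]):.3f} bits per byte")

Plain text and ordinary binaries usually score well below 8, while compressed or encrypted files score close to 8, which is what the malware-analysis use in the next answer relies on.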

尐偏执 2024-07-19 23:25:23


Entropy is like a hash code for virus researchers as well. The more entropy you measure, the more likely it is that the code is compressed or encrypted, which could potentially be a virus.

A standard binary would have lower entropy than a compressed or encrypted one.

甜味超标? 2024-07-19 23:25:23


Entropy has many meanings typically in Computer Science. It depends on the context. In security, entropy means how much randomness you put in; for instance, when you generate a private key, many applications ask you to move the mouse around to generate entropy. This generates entropy by taking the "human" element of randomness and adding it to the hashing process of generating the key.

Now there is also a software engineering definition of entropy. This definition represents out-of-date code, or code that has had many developers writing it. It is typically used in reference to when it is nearly time to refactor your software project: "The code for this project has an enormous amount of entropy because many of the individuals who maintained it are not on the project currently."

Here is a third example usage that I remembered too. In the topic of simulated annealing (as far as computer science is concerned), entropy is described as how much decay has happened during the evaluation of the algorithm.

I guess to answer your question though, there is not a concrete definition of the word 'entropy' except for the ones that you can find in a dictionary. How computer science tends to apply that term depends on the context of the term being used and what it is being applied to.
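
For the security meaning, here is a toy Python sketch of the idea of folding human input timing into a key-generation seed. It is only an illustration of the concept: real applications should use the operating system's CSPRNG (e.g. os.urandom) rather than anything hand-rolled like this, and the sample count is an arbitrary choice.

    import hashlib
    import time

    def collect_entropy(samples: int = 8) -> bytes:
        """Toy entropy pool: hash the timing jitter of user keypresses into a 256-bit seed."""
        pool = hashlib.sha256()
        for i in range(samples):
            input(f"[{i + 1}/{samples}] press Enter at an irregular pace... ")
            # Nanosecond timestamps of human keypresses are hard to predict exactly
            pool.update(time.perf_counter_ns().to_bytes(8, "little"))
        return pool.digest()

    seed = collect_entropy()
    print("seed:", seed.hex())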

人生戏 2024-07-19 23:25:23


It's easy to make a big deal out of entropy. To my mind it is a pretty simple and useful concept.

Basically it quantifies what, on average, you will learn from an event, like flipping a coin, taking a branch instruction, or indexing an array.

For example, a comparison operation in the middle of a search algorithm has a certain probability P of taking one branch, and 1-P of taking the other.

Suppose P is 1/2, as it is in a binary search. Then if you take that branch, you know 1 bit more than you did before, because log(2/1), base 2, is 1. On the other hand, if you take the other branch you also learn 1 bit.

To get the average amount of information you will learn, multiply what you learn on the first branch times the probability you take that branch, plus what you learn on the second branch times the probability of that branch.

1/2 times 1 bit, plus 1/2 times 1 bit, is 1/2 bit plus 1/2 bit, or total 1 bit of entropy. That's what you can expect to learn on average from that decision.

On the other hand, suppose you are doing linear search in a table of 1024 entries.

On the first == test, the probability of YES is 1/1024, so the entropy of YES at that decision is

1/1024 times log(1024/1)

or 1/1024 * 10 = about 1/100 bit.

So if the answer is YES, you learn 10 bits, but the chance of that is about 1 in a thousand.

On the other hand, NO is much more likely. Its entropy is

1023/1024 * log(1024/1023)

or roughly 1 times roughly zero = about zero.

Add the two together, and on average you will learn about 1/100 of a bit on that decision.

That's why linear search is slow. The entropy (how much you can expect to learn) at each decision is too small, since you're going to have to learn 10 bits to find the entry in the table.
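
A sketch of that arithmetic in Python: the expected information (entropy) gained from one yes/no decision, for the 50/50 branch of a binary search versus a single == test in a 1024-entry linear search:

    from math import log2

    def decision_entropy(p_yes: float) -> float:
        """Expected bits learned from a yes/no decision, given the probability of YES."""
        return sum(p * log2(1 / p) for p in (p_yes, 1.0 - p_yes) if p > 0)

    print(f"binary search branch (P = 1/2):    {decision_entropy(1 / 2):.4f} bits")    # 1.0000
    print(f"linear search test   (P = 1/1024): {decision_entropy(1 / 1024):.4f} bits")  # ~0.0112
    # Locating one entry out of 1024 requires log2(1024) = 10 bits in total, so a
    # test that yields only ~0.01 bits on average is why linear search needs so
    # many steps compared with binary search.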

神经大条 2024-07-19 23:25:23


Entropy in computer science commonly refers to how random a string of bits is.
The following question is about making that precise:

How do I compute the approximate entropy of a bit string?

宛菡 2024-07-19 23:25:23


In simple words, if you know the probabilities of the symbols in a language, you can compute the average information content of a symbol in that language.

Or

The entropy of a language is a measure of the information content of an average symbol in the language.

Consider a fair coin:

There are two symbols, each with probability 1/2, so the entropy is calculated as

h = -(1/2 * log2(1/2) + 1/2 * log2(1/2)) = 1 bit
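
Checking that numerically, and contrasting it with the 99%/1% bent coin from an earlier answer:

    from math import log2

    def coin_entropy(p_heads: float) -> float:
        """Entropy of a coin flip, in bits."""
        return -sum(p * log2(p) for p in (p_heads, 1.0 - p_heads) if p > 0)

    print(f"fair coin (p = 0.5):  {coin_entropy(0.5):.3f} bits")    # 1.000
    print(f"bent coin (p = 0.99): {coin_entropy(0.99):.3f} bits")   # ~0.081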
