眼眸里的那抹悲凉


眼眸里的那抹悲凉 2025-02-20 17:50:33


Try using the connection string format described here. If you are on a domain and will be using the logged-in account on the client to access the SQL Server, you can use a trusted connection:

Server=myServerAddress;Database=myDataBase;Trusted_Connection=True;

Otherwise you'll need to provide credentials.

Server=myServerAddress;Database=myDataBase;User Id=myUsername;Password=myPassword;
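
If you are connecting from Python rather than .NET, a minimal sketch with pyodbc could look like the following; note that ODBC connection strings need a DRIVER= entry and use Trusted_Connection=yes, and the driver name below is an assumption - use whichever driver is installed on your machine.

import pyodbc

# Trusted (Windows) connection; the driver name is an assumption.
conn_str = (
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=myServerAddress;DATABASE=myDataBase;Trusted_Connection=yes;"
)
# Or, with explicit credentials:
# conn_str = (
#     "DRIVER={ODBC Driver 17 for SQL Server};"
#     "SERVER=myServerAddress;DATABASE=myDataBase;UID=myUsername;PWD=myPassword;"
# )

with pyodbc.connect(conn_str) as conn:
    print(conn.cursor().execute("SELECT @@VERSION").fetchone()[0])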

How to make a connection string for Microsoft SQL Server on another computer

眼眸里的那抹悲凉 2025-02-20 15:53:27


You need to run git push origin master to push the changes to your GitHub remote repo, because you're on the master branch in your local repository.

Unable to push a repository to GitHub

眼眸里的那抹悲凉 2025-02-20 15:36:40


Use an Nx workspace, which will allow you to manage multiple repositories.
There is a folder called 'libs' in the workspace where we can add common modules; from there we can share components and services with all the other repositories.
Read the Nx documentation for more information: https://nx.dev/getting-started/nx-and-angular

How to convert an Angular app from a monorepo to a polyrepo?

眼眸里的那抹悲凉 2025-02-20 14:05:18


Why not use a row access policy instead? It might take some tweaking, but you could create a row access policy similar to:

create or replace row access policy test_policy as (val varchar) returns boolean ->
  case
    when lower(current_statement()) like '%select%*%' 
  then false else true end;

Applying this policy to a table would not return any records if a select * was present in the query. You could apply this policy to every table and it wouldn't affect your schema in any way.

How to prevent people from running SELECT * on a Snowflake table?

眼眸里的那抹悲凉 2025-02-20 07:01:19


You have to have an instance of your enum; currently you only have the "blueprint" of it. Create a field of your Position type (I've renamed it PositionTypes) and use that. Also, as a personal preference, I don't like having my enums embedded in classes, since you'd have to prefix the class name every time you use it.

An example:

import java.util.*;
import java.lang.*;
import java.io.*;

class Profession {
    public enum PositionTypes {
        Developer, Mechanic, Director
    }

    private PositionTypes _position;

    public void setPosition(PositionTypes position) {
        _position = position;
    }

    public PositionTypes getPosition() {
        return _position;
    }
}

class Main {
    public static void main(String[] args) {
        Profession p1 = new Profession();
        System.out.println(p1.getPosition());
    
        p1.setPosition(Profession.PositionTypes.Director);
        System.out.println(p1.getPosition());
    
        if (p1.getPosition() == Profession.PositionTypes.Director)
            System.out.println("We made a check!");
    }
}

This outputs:

null
Director
We made a check!

op - residents in Java

眼眸里的那抹悲凉 2025-02-20 04:44:50


You can't pass two variables to set, but you can pass an object (or an array).

class Circle {

  get defaultLocation() {
    return this._defaultLocation
  }
  
  set defaultLocation(loc) {
    this._defaultLocation = loc
  }
  
  constructor(radius) {
    this.radius = radius;
    this._defaultLocation = {
        x: 0,
        y: 0
    };
  }

}

const circle = new Circle(10);

circle.defaultLocation = {
  x: 5,
  y: 6
};

Repositioning coordinates by using a setter in JavaScript

眼眸里的那抹悲凉 2025-02-20 01:13:57


The first layer of the model expects two channels rather than one.
Simply pass the correct input shape to "summary" as follows:

summary(model, [(2, dim1), (2, dim2)])

Edit: In the forward function I would do the concatenation as follows (if both of the model's inputs have the same shape):

w = torch.cat([x,y], dim=1)
w = self.flatten(w)

Edit:
Here is working code using the correct implementation:

from torch import nn
import torch.nn.functional as F
import torch

class myDNN(nn.Module):
  def __init__(self):
    super(myDNN, self).__init__()

    # layers definition

    # first convolutional block
    self.path1_conv1 = nn.Conv1d(in_channels=2, out_channels=8, kernel_size=7)
    self.path1_pool1 = nn.MaxPool1d(kernel_size = 2, stride=2)

    # second convolutional block
    self.path1_conv2 = nn.Conv1d(in_channels=8, out_channels=16, kernel_size=3)
    self.path1_pool2 = nn.MaxPool1d(kernel_size = 2, stride=2)

    # third convolutional block
    self.path1_conv3 = nn.Conv1d(in_channels=16, out_channels=32, kernel_size=3)
    self.path1_pool3 = nn.MaxPool1d(kernel_size = 2, stride=2)

    # fourth convolutional block
    self.path1_conv4 = nn.Conv1d(in_channels=32, out_channels=64, kernel_size=3)
    self.path1_pool4 = nn.MaxPool1d(kernel_size = 2, stride=2)

    # fifth convolutional block
    self.path1_conv5 = nn.Conv1d(in_channels=64, out_channels=128, kernel_size=3)
    self.path1_pool5 = nn.MaxPool1d(kernel_size = 2, stride=2)

    self.path2_conv1 = nn.Conv1d(in_channels=2, out_channels=8, kernel_size=7)
    self.path2_pool1 = nn.MaxPool1d(kernel_size = 2, stride=2)

    # second convolutional block
    self.path2_conv2 = nn.Conv1d(in_channels=8, out_channels=16, kernel_size=3)
    self.path2_pool2 = nn.MaxPool1d(kernel_size = 2, stride=2)

    # third convolutional block
    self.path2_conv3 = nn.Conv1d(in_channels=16, out_channels=32, kernel_size=3)
    self.path2_pool3 = nn.MaxPool1d(kernel_size = 2, stride=2)

    # fourth convolutional block
    self.path2_conv4 = nn.Conv1d(in_channels=32, out_channels=64, kernel_size=3)
    self.path2_pool4 = nn.MaxPool1d(kernel_size = 2, stride=2)

    # fifth convolutional block
    self.path2_conv5 = nn.Conv1d(in_channels=64, out_channels=128, kernel_size=3)
    self.path2_pool5 = nn.MaxPool1d(kernel_size = 2, stride=2)


    self.flatten = nn.Flatten()
    self.drop1 = nn.Dropout(p=0.5)
    self.fc1 = nn.Linear(in_features=2048, out_features=50)  # 2048 = flattened size of the two concatenated paths for inputs (2, 246) and (2, 447)
    self.drop2 = nn.Dropout(p=0.5) #dropout
    self.fc2 = nn.Linear(in_features=50, out_features=25)
    self.fc3 = nn.Linear(in_features=25, out_features=2)


  def forward(self, x, y):
    x = F.relu(self.path1_conv1(x))
    x = self.path1_pool1(x)
    x = F.relu(self.path1_conv2(x))
    x = self.path1_pool2(x)
    x = F.relu(self.path1_conv3(x))
    x = self.path1_pool3(x)
    x = F.relu(self.path1_conv4(x))
    x = self.path1_pool4(x)
    x = F.relu(self.path1_conv5(x))
    x = self.path1_pool5(x)

    y = F.relu(self.path2_conv1(y))
    y = self.path2_pool1(y)
    y = F.relu(self.path2_conv2(y))
    y = self.path2_pool2(y)
    y = F.relu(self.path2_conv3(y))
    y = self.path2_pool3(y)
    y = F.relu(self.path2_conv4(y))
    y = self.path2_pool4(y)
    y = F.relu(self.path2_conv5(y))
    y = self.path2_pool5(y)

    #flatten
    x = self.flatten(x)
    y = self.flatten(y)

    w = torch.cat([x,y],dim=1)
    print(w.shape)
    w = self.drop1(w) #dropout layer
    w = F.relu(self.fc1(w)) #layer fully connected with re lu
    w = self.drop2(w)
    w = F.relu(self.fc2(w)) #layer fully connected with re lu

    w = self.fc3(w) #layer fully connected
    out = F.log_softmax(w, dim=1)

    return out

def main():
    model = myDNN()
    print(model)
    from torchsummary import summary
    if torch.cuda.is_available():
        summary(model.cuda(), input_size = [(2,246),(2,447)])
    else:
        summary(model, input_size = [(2,246),(2,447)])
if __name__ == '__main__':
    main()

How to solve this error with PyTorch summary?

眼眸里的那抹悲凉 2025-02-19 19:57:50


Use this in case you want case-insensitive checking for duplicate words.

(?i)\\b(\\w+)\\s+\\1\\b
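
A minimal Python sketch of how this pattern behaves (the sample text is made up; the doubled backslashes above are string-literal escaping for languages such as Java, so in a Python raw string a single backslash is enough):

import re

# Case-insensitive match of a word immediately repeated after whitespace.
pattern = re.compile(r"(?i)\b(\w+)\s+\1\b")

text = "This is is a test with With repeated words"
print([m.group(1) for m in pattern.finditer(text)])  # ['is', 'with']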

Regular expression for duplicated words

眼眸里的那抹悲凉 2025-02-19 12:43:44


This is what make is for.

${folder}: 
        mkdir -p ${folder}

Create a folder in a Makefile if it doesn't exist - error

眼眸里的那抹悲凉 2025-02-19 12:04:59


Copy and paste the formulas:

Maybe you can copy and paste the formulas you need from "jQuery.sheet". Moved to:

https://github.com/Spreadsheets/WickedGrid

Looks to be all "open source"

Won't fix the issue

Also: The issue "Enable scripts to use standard spreadsheet functions" is marked as "Wont fix", see https://code.google.com/p/google-apps-script-issues/issues/detail?id=26

Ethercalc
There is a Google-like open-source spreadsheet called EtherCalc.

GUI Code:
https://github.com/audreyt/ethercalc

Formulas: https://github.com/marcelklehr/socialcalc

Demo - on sandstorm:
https://apps.sandstorm.io/app/a0n6hwm32zjsrzes8gnjg734dh6jwt7x83xdgytspe761pe2asw0

Is there a way to evaluate a formula stored in a cell?

眼眸里的那抹悲凉 2025-02-19 07:23:40


Summarizing my comments:

You may want to append the forecast data from each new scraping run to the existing data in the same database table.

From each new web-scraping run you will get approx. 40 new records with the same scraping timestamp but different forecast timestamps.

E.g., these would be the columns of the table, using ID as the primary key with AUTOINCREMENT:

ID | Scraping_time | Forecast_hours | Wind_speed | Wind_gusts | Wind_direction | Wave | Wave_period | Wave_direction

Note:
if you use SQLite, you could leave out the ID column, as SQLite adds such a ROWID column by default if no other primary key has been specified
(https://www.sqlite.org/autoinc.html)
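
A minimal Python sketch of this approach with the standard sqlite3 module (table and column names follow the layout above; the sample values are made up):

import sqlite3
from datetime import datetime

conn = sqlite3.connect("forecasts.db")
conn.execute("""
    CREATE TABLE IF NOT EXISTS forecast (
        id INTEGER PRIMARY KEY AUTOINCREMENT,
        scraping_time TEXT,
        forecast_hours INTEGER,
        wind_speed REAL,
        wind_gusts REAL,
        wind_direction REAL,
        wave REAL,
        wave_period REAL,
        wave_direction REAL
    )
""")

# Each scraping run appends ~40 rows that share the same scraping_time.
scraping_time = datetime.now().isoformat(timespec="seconds")
rows = [(scraping_time, h, 12.3, 18.0, 270.0, 1.4, 7.5, 250.0) for h in range(0, 120, 3)]
conn.executemany(
    "INSERT INTO forecast (scraping_time, forecast_hours, wind_speed, wind_gusts,"
    " wind_direction, wave, wave_period, wave_direction) VALUES (?, ?, ?, ?, ?, ?, ?, ?)",
    rows,
)
conn.commit()
conn.close()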

Ways to store forecast time-series data with Python

眼眸里的那抹悲凉 2025-02-18 20:23:30


Unfortunately MySQL doesn't have PIVOT, which is the first step needed to give the expected result, so you have to use:

 select StudentID,
        max(case when ColumnSequence=1 then Major end) Major1,
        max(case when ColumnSequence=2 then Major end) Major2
 from (
        Select StudentID, 
               Major,
               row_number() over(partition by StudentID order by StudentID) ColumnSequence
        from tbl ) as t1
 group by StudentID

The above query will convert rows to columns; if you have more than two values per StudentID, add another condition: max(case when ColumnSequence=3 then Major end) Major3.

Then use the query below to find pairs that occur more than once.
Note: I added the values (201,'Science'), (201,'Maths') to get a count greater than one.

Use:

SELECT CONCAT(LEAST(Major1, Major2), ',', GREATEST(Major1, Major2)) as pair, 
       COUNT(*) as unique_pair_repeats
FROM  ( select StudentID,
               max(case when ColumnSequence=1 then Major end) Major1,
               max(case when ColumnSequence=2 then Major end) Major2
        from ( Select StudentID, 
                      Major,
                      row_number() over(partition by StudentID order by StudentID) ColumnSequence
               from tbl 
             ) as t1
       group by StudentID 
       ) as t2 
GROUP BY pair
HAVING count(*) > 1;

Check the demo for more info: https://dbfiddle.uk/?rdbms=mysql_8.0&fiddle=acf26229e885e5e9bd02a90f7c7b057f

Self join for unique counts in MySQL

眼眸里的那抹悲凉 2025-02-18 16:59:18


Try installing the emmet language server.

With Mason

:MasonInstall emmet-ls

The Astro LSP didn't do emmet correctly for me either.

How to enable Emmet in .astro files?

眼眸里的那抹悲凉 2025-02-18 15:33:51


cdk deploy synthesizes the CloudAssembly artifacts into cdk.out each time before deploying. Caching wouldn't help there.

However, the CDK apparently caches zipped artifacts (before uploading to S3), so in theory you could save .zip-ing time by caching cdk.out/.cache.

AWS CDK and caching the cdk.out build directory in a pipeline

眼眸里的那抹悲凉 2025-02-18 10:49:39


@CodingMytra thanks to your comment I found a solution.

By adding these UseRequestInterceptor and UseResponseInterceptor options, the accessToken and refreshToken variables update themselves automatically.

app.UseSwaggerUI(swaggerUiOptions =>
            {
                var responseInterceptor = @"(res) => 
                {
                    if(res.obj.accessToken)
                    { 
                        console.log(res.obj.accessToken);
                        const token = res.obj.accessToken;
                        localStorage.setItem('token', token);
                    };
                    if(res.obj.refreshToken)
                    { 
                        console.log(res.obj.refreshToken); 
                        const refresh_token = res.obj.refreshToken; 
                        localStorage.setItem('refresh_token', refresh_token); 
                    }; 
                    return res; 
                }";
                    var requestInterceptor = @"(req) => 
                { 
                    req.headers['Authorization'] = 'Bearer ' + localStorage.getItem('token');
                    req.headers['RefreshToken'] = localStorage.getItem('refresh_token');
                    return req; 
                }";
                swaggerUiOptions.UseResponseInterceptor(Regex.Replace(responseInterceptor, @"\s+", " "));
                swaggerUiOptions.UseRequestInterceptor(Regex.Replace(requestInterceptor, @"\s+", " "));
            });

How to use automatic variables in Swagger UI?
