The problem is the dictionary: dict keys can't be lists, so you need a separate key for the integer and string form of each value.
to_replace = {'1': 'remote', 1: 'remote', '0': 'in_lab', 0: 'in_lab'}
ort_raw['Location'] = ort_raw['Location'].replace(to_replace)
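A quick self-contained check of that mapping (the column name and codes are taken from the snippet above; the sample rows are invented):

```python
import pandas as pd

# Mixed int/str codes, as often happens after CSV round-trips.
ort_raw = pd.DataFrame({'Location': [1, '1', 0, '0']})

# Each key must be listed individually; a list-valued key would raise TypeError.
to_replace = {'1': 'remote', 1: 'remote', '0': 'in_lab', 0: 'in_lab'}
ort_raw['Location'] = ort_raw['Location'].replace(to_replace)

print(ort_raw['Location'].tolist())  # ['remote', 'remote', 'in_lab', 'in_lab']
```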
First install openssl:
brew install openssl
then point the build at it:
export PKG_CONFIG_PATH="/usr/local/opt/openssl@3/lib/pkgconfig"
export CPPFLAGS="-I/usr/local/opt/openssl@3/include"
export LDFLAGS="-L/usr/local/opt/openssl@3/lib"
After that, the exception goes away:
pip3 install pycurl
DEPRECATION: Configuring installation scheme with distutils config files is deprecated and will no longer work in the near future. If you are using a Homebrew or Linuxbrew Python, please see discussion at https://github.com/Homebrew/homebrew-core/issues/76621
Collecting pycurl
Using cached pycurl-7.45.2.tar.gz (234 kB)
Preparing metadata (setup.py) ... done
Building wheels for collected packages: pycurl
Building wheel for pycurl (setup.py) ... done
Created wheel for pycurl: filename=pycurl-7.45.2-cp39-cp39-macosx_10_13_x86_64.whl size=144557 sha256=2c56c17f7987b8739333b8078c6d8b61ed5c8e5239ea182992bbe0eb64208970
Stored in directory: /Users/xlla/Library/Caches/pip/wheels/23/b0/37/d2c3211ee738adfb8ec6b6e10aa00e78ebc4de363f862a12c5
Successfully built pycurl
Installing collected packages: pycurl
Successfully installed pycurl-7.45.2
You wrapped the wrong element in quotes in the filter:
arr[key].count = pageviews.filter(ua => ua === '"' + el.ua + '"').length;
It should be:
arr[key].count = pageviews.filter(ua => '"' + ua + '"' === el.ua).length;
You can use:
from enum import Enum
class Ranks(Enum):
    BEGINNER = (0, {'will': 1, 'health': 1})
    MID = (1, {'will': 2, 'health': 2})
    ADVANCED = (2, {'will': 3, 'health': 3})

    def __init__(self, v1, v2):
        self.v1 = v1
        self.v2 = v2

    @property
    def raw_abilities(self):
        return self.v2
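With tuple-valued members, Enum unpacks each tuple into __init__, so every member carries both its rank index and its ability dict. A self-contained usage sketch:

```python
from enum import Enum

class Ranks(Enum):
    BEGINNER = (0, {'will': 1, 'health': 1})
    MID = (1, {'will': 2, 'health': 2})
    ADVANCED = (2, {'will': 3, 'health': 3})

    def __init__(self, v1, v2):
        self.v1 = v1
        self.v2 = v2

    @property
    def raw_abilities(self):
        return self.v2

# Each member exposes its index and its ability dict.
print(Ranks.MID.v1)             # 1
print(Ranks.MID.raw_abilities)  # {'will': 2, 'health': 2}
```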
I went through the code; the problem is that you expect the generator to produce images of size 128x3x128x128 (batch_size x channels x image_dim x image_dim). However, the way you have written the ConvTranspose2d operations, that is not what happens. Checking the intermediate layers, your generator actually produces output images of size 128x3x80x80, hence the size mismatch: your discriminator expects input images of size 128x3x128x128.
Here are the shapes of the intermediate outputs of the generator's ConvTranspose2d operations:
torch.Size([128, 4096, 4, 4])
torch.Size([128, 2048, 5, 5])
torch.Size([128, 1024, 10, 10])
torch.Size([128, 512, 20, 20])
torch.Size([128, 256, 40, 40])
torch.Size([128, 3, 80, 80])
I suggest modifying the generator's ConvTranspose2d parameters as follows:
class Generator(nn.Module):
    def __init__(self, channels_noise, channels_img, features_g):
        super(Generator, self).__init__()
        self.net = nn.Sequential(
            self._block(channels_noise, features_g*32, 4, 1, 0),
            self._block(features_g*32, features_g*16, 4, 2, 1),
            self._block(features_g*16, features_g*8, 4, 2, 1),
            self._block(features_g*8, features_g*4, 4, 2, 1),
            self._block(features_g*4, features_g*2, 4, 2, 1),
            nn.ConvTranspose2d(features_g*2, channels_img, kernel_size=4, stride=2, padding=1),
            # Output: N x channels_img x 128 x 128
            nn.Tanh(),
        )

    def _block(self, in_channels, out_channels, kernel_size, stride, padding):
        return nn.Sequential(
            nn.ConvTranspose2d(
                in_channels,
                out_channels,
                kernel_size,
                stride,
                padding,
                bias=False,
            ),
            # nn.BatchNorm2d(out_channels),
            nn.ReLU(),
        )

    def forward(self, x):
        return self.net(x)
This produces 128x3x128x128. The intermediate dimensions are as follows:
torch.Size([128, 4096, 4, 4])
torch.Size([128, 2048, 8, 8])
torch.Size([128, 1024, 16, 16])
torch.Size([128, 512, 32, 32])
torch.Size([128, 256, 64, 64])
torch.Size([128, 3, 128, 128])
Just replace your generator with this, and your code should work for images of dimension 3x128x128.
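The shapes above follow from the ConvTranspose2d output-size formula, H_out = (H_in - 1) * stride - 2 * padding + kernel_size (with dilation 1 and no output padding). A pure-Python check of the spatial sizes through the suggested layers, assuming the noise enters as an N x channels_noise x 1 x 1 tensor, which matches the 4x4 first-layer output listed above:

```python
def convtranspose2d_out(h_in, kernel_size, stride, padding):
    # Simplified ConvTranspose2d size formula (dilation=1, output_padding=0).
    return (h_in - 1) * stride - 2 * padding + kernel_size

# (kernel_size, stride, padding) for each layer in the suggested generator.
layers = [(4, 1, 0)] + [(4, 2, 1)] * 5

h = 1  # noise enters as a 1x1 spatial map
sizes = []
for k, s, p in layers:
    h = convtranspose2d_out(h, k, s, p)
    sizes.append(h)

print(sizes)  # [4, 8, 16, 32, 64, 128]
```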
It may be easier with rows_update from dplyr:
library(dplyr)
rows_update(a, b, by = 'V1')
or by assigning (:=) the column ('V2') from the 'b' data into 'a' with data.table:
library(data.table)
setDT(a)[b, V2 := i.V2, on = .(V1)]
I would do it like this:
CREATE TABLE aux AS
SELECT Users.user_id, COUNT(Undostres.user_id) AS count
FROM Users
LEFT OUTER JOIN Undostres USING (user_id)
GROUP BY Users.user_id;
I assume you have a Users table that lists all users, whether or not they have posted any comments. In that case the LEFT OUTER JOIN helps: if there are no comments for a given user, the user is still part of the result, with a count of 0.
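A self-contained sketch of that behavior using sqlite3 (the table contents are invented for illustration, and an ORDER BY is added for a deterministic result):

```python
import sqlite3

con = sqlite3.connect(':memory:')
con.executescript("""
    CREATE TABLE Users (user_id INTEGER);
    CREATE TABLE Undostres (user_id INTEGER);
    INSERT INTO Users VALUES (1), (2), (3);
    INSERT INTO Undostres VALUES (1), (1), (3);  -- user 2 has no comments
""")

rows = con.execute("""
    SELECT Users.user_id, COUNT(Undostres.user_id) AS count
    FROM Users
    LEFT OUTER JOIN Undostres USING (user_id)
    GROUP BY Users.user_id
    ORDER BY Users.user_id
""").fetchall()

# User 2 still appears, with count 0, because of the LEFT OUTER JOIN.
print(rows)  # [(1, 2), (2, 0), (3, 1)]
```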
IIUC, use dtype=str as a parameter to read_table to prevent pandas from inferring data types:
df = pd.read_table('ID 00d0.txt', header=None, delim_whitespace=True, dtype=str,
names=['0','ID',"B0","B1","B2","B3","B4","B5","B6","B7"])
Output:
>>> df
0 ID B0 B1 B2 B3 B4 B5 B6 B7
0 1656033437.571007 00d0 A0 00 00 00 00 00 ff 01
1 1656033437.590747 00d0 30 00 00 00 00 00 ff 01
2 1656033437.610978 00d0 30 00 00 00 00 00 ff 01
3 1656033437.630766 00d0 30 00 00 00 00 00 ff 01
4 1656033437.650933 00d0 30 00 00 00 00 00 ff 01
.. ... ... .. .. .. .. .. .. .. ..
96 1656033439.490835 00d0 2f 00 00 00 00 00 ff 01
97 1656033439.51084 00d0 2f 00 00 00 00 00 fe 01
98 1656033439.530714 00d0 2f 00 00 00 00 00 ff 01
99 1656033439.550823 00d0 2f 00 01 00 00 00 ff 01
100 1656033439.570697 00d0 2f 00 01 00 00 00 ff 01
[101 rows x 10 columns]
You can convert the '0' column back to numeric by chaining .astype({'0': float}).
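A small reproducible version of this, with a two-line stand-in for the whitespace-delimited file (the values are taken from the output above):

```python
import pandas as pd
from io import StringIO

data = "1656033437.571007 00d0 A0 00\n1656033437.590747 00d0 30 00\n"
df = pd.read_table(StringIO(data), header=None, delim_whitespace=True,
                   dtype=str, names=['0', 'ID', 'B0', 'B1'])

# Everything is read as strings, so '00d0' and 'A0' survive untouched...
print(df['ID'].tolist())  # ['00d0', '00d0']

# ...and the timestamp column can be converted back afterwards.
df = df.astype({'0': float})
print(df['0'].dtype)      # float64
```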
Two things to unpack here.
First, storing files in a database is not recommended. It is better to use a storage service, or the server's filesystem directly.
Second, the usual approach is to upload and save the file according to your strategy (server filesystem, database, or third-party storage), and then clean it up if the user never completes the payment. You need to define the condition that triggers the cleanup: because the user uploaded a file and has been inactive for a certain period, because they clicked a specific button, or a combination of both.
To trigger the cleanup, you have several options:
- When the file is uploaded, schedule a task, e.g. with django-q, that checks one hour after the upload whether the payment has completed, and deletes the file if not.
- Write a Django management command, triggered daily by a cron job, that deletes files older than one hour with no completed payment.
- You can also work with Django sessions: periodically scan for sessions that have been inactive for one hour with a payment still pending, assume those users will not continue with the payment, and delete their uploaded files.
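Whichever trigger you pick, the cleanup condition itself can be kept separate from the scheduling mechanism. A minimal sketch (the function and field names are invented for illustration):

```python
from datetime import datetime, timedelta

GRACE_PERIOD = timedelta(hours=1)

def should_delete(uploaded_at, payment_completed, now=None):
    """Delete an upload once the grace period has passed without payment."""
    now = now or datetime.utcnow()
    return not payment_completed and now - uploaded_at >= GRACE_PERIOD

# A django-q task or a cron-driven command would apply this to each upload.
uploaded = datetime(2024, 1, 1, 12, 0)
print(should_delete(uploaded, False, now=datetime(2024, 1, 1, 13, 30)))  # True
print(should_delete(uploaded, True,  now=datetime(2024, 1, 1, 13, 30)))  # False
print(should_delete(uploaded, False, now=datetime(2024, 1, 1, 12, 30)))  # False
```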
See @Michael's non-vectorized solution:
d = {}; torch.tensor([d.setdefault(tuple(i.tolist()), e) for e, i in enumerate(t4)])
Another non-vectorized solution is:
t4_list = t4.tolist(); torch.tensor(list(map(lambda x: t4_list.index(x), t4)))
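Both snippets map each row of t4 to the index of its first occurrence. A plain-Python illustration of the same mapping (the input rows are invented):

```python
rows = [(1, 2), (3, 4), (1, 2), (5, 6), (3, 4)]

# setdefault stores the index of the first time each row is seen.
d = {}
first_seen = [d.setdefault(row, i) for i, row in enumerate(rows)]

# list.index does the same lookup, but rescans the list for every row.
via_index = [rows.index(row) for row in rows]

print(first_seen)  # [0, 1, 0, 3, 1]
print(via_index)   # [0, 1, 0, 3, 1]
```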
At the very least, you made a mistake when setting binding.Source. For an ordinary instance property, it should be the object that owns the property, in your case an instance of Config. For a static property, you don't need to set binding.Source at all.
In reading your question above, I think based on what you are asking, you are needing to call this:
ObjectInstance.TryGetCurrentState Method (Microsoft.Azure.ObjectAnchors) | Microsoft Docs
This will return an ObjectInstanceState with a center property that is a
SpatialGraphLocation Struct (Microsoft.Azure.ObjectAnchors.SpatialGraph) | Microsoft Docs
The SpatialGraphLocation will have the pose information that you are looking for.
If you have questions on Coordinate Systems in general in Mixed Reality, I'd suggest you start here to learn more:
Coordinate systems - Mixed Reality
Another good resources on AOA, if you haven't looked at the sample already, is available here:
https://github.com/Azure/azure-object-anchors
There is also a tutorial that walks through this sample here:
Quickstart: In-depth MRTK walkthrough - Azure Object Anchors | Microsoft Docs
How do I get the position from a detected ObjectInstance?