洋洋洒洒


洋洋洒洒 2025-02-20 20:35:33

Assuming you are searching for a function that takes multiple key/value arrays (as you describe), each result should be in the format key1[sp]val1,[sp]key2[sp]val2, and you want an array of all these strings to use later, I wrote this function:

    <?php

    // Build one "key value, key value" string per array passed in
    // and return all of the strings as an array.
    function ar(){
        $a = func_get_args();
        $res = [];
        foreach ($a as $ar) {
            $s = '';
            $i = 0;
            foreach ($ar as $ch => $vl) {
                $s .= $ch . ' ' . $vl;
                if ($i < count($ar) - 1) {
                    $s .= ', ';
                }
                $i++;
            }
            $res[] = $s;
        }
        return $res;
    }

    /* output values by sending multiple arrays to parse */
    var_dump(ar(
        [13 => 500, 16 => 1000],
        [12 => 1, 13 => 1111]
    ));
    ?>

Custom string from an array with keys in PHP

洋洋洒洒 2025-02-20 04:48:43

You can use reduce and Object.entries to get the result, like this:

const data = [
  {
    "Area": "Werk Produktivität [%] - Target",
    "Jan": 86.21397507374327,
    "Feb": 86.0570021973368,
    "Mrz": 88.70898346258058,
    "Apr": 85.29801908413164,
    "May": 85.07431241640211
  },
  {
    "Area": "Werk Produktivität [%] - Actual",
    "Jan": 84.17054711398421,
    "Feb": 83.80826026601528,
    "Mrz": 84.11553769971036,
    "Apr": 83.76460916731,
    "May": 82.69773876702813
  }
]
// flatten each { Area, Jan, Feb, ... } row into one { Area, <month>: value } object per month
const result = data.reduce((res, {Area, ...rest}) => {
  Object.entries(rest).forEach(([key, value]) => res.push({Area, [key]: value}))
  return res
}, [])
console.log(result)

Destructuring objects in an array in JavaScript

洋洋洒洒 2025-02-20 03:56:30

From the example here: Android Touch System Gesture-Handling Modifiers in Jetpack Compose

@Composable
fun TransformableDemo() {
    var scale by remember { mutableStateOf(1f) }
    var rotation by remember { mutableStateOf(0f) }
    var offset by remember { mutableStateOf(Offset.Zero) }
    val state = rememberTransformableState { 
        zoomChange, offsetChange, rotationChange ->
            scale *= zoomChange
            rotation += rotationChange
            offset += offsetChange
    }

    Box(
        modifier = Modifier
            .graphicsLayer(
                scaleX = scale,
                scaleY = scale,
                rotationZ = rotation,
                translationX = offset.x,
                translationY = offset.y
            )
            .transformable(state = state)
            .background(Color.Blue)
            .fillMaxSize()
    )
}

Android Compose: how to scale and rotate an image by dragging its border?

洋洋洒洒 2025-02-19 14:22:24

GNU Awk 5.0.1, API: 2.0 (GNU MPFR 4.0.2, GNU MP 6.2.0)

You might give other implementations of AWK a try. According to a test done in 2009¹, "Don't MAWK AWK – the fastest and most elegant big data munging language!", nawk was found to be faster than gawk, and mawk faster than nawk. You would need to run tests with your own data to find out whether another implementation gives a noticeable boost.

¹ So versions available in 2022 might give different results.

Faster drop-in replacement for bash "cut" for a specific application

洋洋洒洒 2025-02-18 21:33:38

If you execute this command, you will get a list of all files in the package:

rpm -ql mariadb....

I want to know where MariaDB is installed on CentOS

洋洋洒洒 2025-02-18 07:47:53

I got stuck with a similar issue. The reason is that you are using a TensorFlow environment as the parameter of the PyDriver to collect the data. A TensorFlow environment adds a batch dimension to all the tensors it produces, so each time_step generated will have an additional dimension of size 1.

Now, when you retrieve the data from the replay buffer, each time_step will have this additional dimension, which is not compatible with the data the agent's train function expects, hence the error.

You need to use a Python environment here in order to collect the data with the right dimensions. Also, you then don't have to use batch_time_steps=False.

I am not sure how to collect data with the right dimensions from a TensorFlow environment, so I have modified your code a bit to collect the data using a Python environment, and it should run now.
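
As a quick illustration of that batch dimension, here is a small hypothetical sketch (my own addition, not from the original post; it assumes the cGame class and the tf_py_environment import from the code blocks below are in scope):

    # py_env produces unbatched time_steps; the TF wrapper adds an outer batch dim of 1
    py_env = cGame()
    tf_env = tf_py_environment.TFPyEnvironment(cGame())

    print(py_env.reset().observation.shape)   # (441,)   -> what PyDriver/the Reverb observer expect here
    print(tf_env.reset().observation.shape)   # (1, 441) -> extra batch dimension added by TFPyEnvironment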

PS: there were a few trivial bugs in the code you posted (e.g. using log_interval instead of self.log_interval, etc.).

Agent Class

    from __future__ import absolute_import
    from __future__ import division
    from __future__ import print_function

    import numpy as np
    import random
    from IPython.display import clear_output
    import time



    import abc
    import tensorflow as tf
    import numpy as np

    from tf_agents.environments import py_environment
    from tf_agents.environments import tf_environment
    from tf_agents.environments import tf_py_environment
    from tf_agents.environments import utils
    from tf_agents.specs import array_spec
    from tf_agents.environments import wrappers
    from tf_agents.environments import suite_gym
    from tf_agents.trajectories import time_step as ts


    class cGame(py_environment.PyEnvironment):
        def __init__(self):
            self.xdim = 21
            self.ydim = 21
            self.mmap = np.array([[0] * self.xdim] * self.ydim)
            self._turnNumber = 0
            self.playerPos = {"x": 1, "y": 1}
            self.totalScore = 0
            self.reward = 0.0
            self.input = 0
            self.addRewardEveryNTurns = 4
            self.addBombEveryNTurns = 3
            self._episode_ended = False

            ## player = 13
            ## bomb   = 14

            self._action_spec = array_spec.BoundedArraySpec(shape=(),
                                                            dtype=np.int32,
                                                            minimum=0, maximum=3,
                                                            name='action')
            self._observation_spec = array_spec.BoundedArraySpec(shape=(441,),
                                                                 minimum=np.array(
                                                                     [-1] * 441),
                                                                 maximum=np.array(
                                                                     [20] * 441),
                                                                 dtype=np.int32,
                                                                 name='observation')  # (self.xdim, self.ydim)  , self.mmap.shape,  minimum = -1, maximum = 10

        def action_spec(self):
            return self._action_spec

        def observation_spec(self):
            return self._observation_spec

        def addMapReward(self):
            dx = random.randint(1, self.xdim - 2)
            dy = random.randint(1, self.ydim - 2)
            if dx != self.playerPos["x"] and dy != self.playerPos["y"]:
                self.mmap[dy][dx] = random.randint(1, 9)
            return True

        def addBombToMap(self):
            dx = random.randint(1, self.xdim - 2)
            dy = random.randint(1, self.ydim - 2)
            if dx != self.playerPos["x"] and dy != self.playerPos["y"]:
                self.mmap[dy][dx] = 14
            return True

        def _reset(self):
            self.mmap = np.array([[0] * self.xdim] * self.ydim)
            for y in range(self.ydim):
                self.mmap[y][0] = -1
                self.mmap[y][self.ydim - 1] = -1
            for x in range(self.xdim):
                self.mmap[0][x] = -1
                self.mmap[self.ydim - 1][x] = -1

            self.playerPos["x"] = random.randint(1, self.xdim - 2)
            self.playerPos["y"] = random.randint(1, self.ydim - 2)
            self.mmap[self.playerPos["y"]][self.playerPos["x"]] = 13

            for z in range(10):
                ## place 10 targets
                self.addMapReward()
            for z in range(5):
                ## place 5 bombs
                ## bomb   = 14
                self.addBombToMap()
            self._turnNumber = 0
            self._episode_ended = False
            # return ts.restart (self.mmap)
            dap = ts.restart(np.array(self.mmap, dtype=np.int32).flatten())
            return (dap)

        def render(self, mapToRender):
            mapToRender.reshape(21, 21)
            for y in range(self.ydim):
                o = ""
                for x in range(self.xdim):
                    if mapToRender[y][x] == -1:
                        o = o + "#"
                    elif mapToRender[y][x] > 0 and mapToRender[y][x] < 10:
                        o = o + str(mapToRender[y][x])
                    elif mapToRender[y][x] == 13:
                        o = o + "@"
                    elif mapToRender[y][x] == 14:
                        o = o + "*"
                    else:
                        o = o + " "
                print(o)
            print('TOTAL SCORE:', self.totalScore, 'LAST TURN SCORE:', self.reward)
            return True

        def getInput(self):
            self.input = 0
            i = input()
            if i == 'w' or i == '0':
                print('going N')
                self.input = 1
            if i == 's' or i == '1':
                print('going S')
                self.input = 2
            if i == 'a' or i == '2':
                print('going W')
                self.input = 3
            if i == 'd' or i == '3':
                print('going E')
                self.input = 4
            if i == 'x':
                self.input = 5
            return self.input

        def processMove(self):

            self.mmap[self.playerPos["y"]][self.playerPos["x"]] = 0
            self.reward = 0
            if self.input == 0:
                self.playerPos["y"] -= 1
            if self.input == 1:
                self.playerPos["y"] += 1
            if self.input == 2:
                self.playerPos["x"] -= 1
            if self.input == 3:
                self.playerPos["x"] += 1

            cloc = self.mmap[self.playerPos["y"]][self.playerPos["x"]]

            if cloc == -1 or cloc == 14:
                self.totalScore = 0
                self.reward = -99

            if cloc > 0 and cloc < 10:
                self.totalScore += cloc
                self.reward = cloc
                self.mmap[self.playerPos["y"]][self.playerPos["x"]] = 0

            self.mmap[self.playerPos["y"]][self.playerPos["x"]] = 13

            self.render(self.mmap)

        def runTurn(self):
            clear_output(wait=True)
            if self._turnNumber % self.addRewardEveryNTurns == 0:
                self.addMapReward()
            if self._turnNumber % self.addBombEveryNTurns == 0:
                self.addBombToMap()

            self.getInput()
            self.processMove()
            self._turnNumber += 1
            if self.reward == -99:
                self._turnNumber += 1
                self._reset()
                self.totalScore = 0
                self.render(self.mmap)
            return (self.reward)

        def _step(self, action):

            if self._episode_ended == True:
                return self._reset()

            clear_output(wait=True)
            if self._turnNumber % self.addRewardEveryNTurns == 0:
                self.addMapReward()
            if self._turnNumber % self.addBombEveryNTurns == 0:
                self.addBombToMap()

            ## make sure action does produce exceed range
            # if action > 5 or action <1:
            #    action =0
            self.input = action  ## value 1 to 4
            self.processMove()
            self._turnNumber += 1

            if self.reward == -99:
                self._turnNumber += 1
                self._episode_ended = True
                # self._reset()
                self.totalScore = 0
                self.render(self.mmap)
                return ts.termination(np.array(self.mmap, dtype=np.int32).flatten(),
                                      reward=self.reward)
            else:
                return ts.transition(np.array(self.mmap, dtype=np.int32).flatten(),
                                     reward=self.reward)  # , discount = 1.0

        def run(self):
            self._reset()
            self.render(self.mmap)
            while (True):
                self.runTurn()
                if self.input == 5:
                    return ("EXIT on input x ")


    env = cGame()

Driver Code

    from tf_agents.specs import tensor_spec
    from tf_agents.networks import sequential
    from tf_agents.agents.dqn import dqn_agent
    from tf_agents.utils import common
    from tf_agents.policies import py_tf_eager_policy
    from tf_agents.policies import random_tf_policy
    import reverb
    from tf_agents.replay_buffers import reverb_replay_buffer
    from tf_agents.replay_buffers import reverb_utils
    from tf_agents.trajectories import trajectory
    from tf_agents.drivers import py_driver
    from tf_agents.environments import BatchedPyEnvironment
    
    
    class mTrainer:
        def __init__(self):
    
            self.returns = None
            self.train_env = tf_py_environment.TFPyEnvironment(cGame())
            self.eval_env = tf_py_environment.TFPyEnvironment(cGame())
    
            self.num_iterations = 20000  # @param {type:"integer"}
            self.initial_collect_steps = 100  # @param {type:"integer"}
            self.collect_steps_per_iteration = 100  # @param {type:"integer"}
            self.replay_buffer_max_length = 100000  # @param {type:"integer"}
            self.batch_size = 64  # @param {type:"integer"}
            self.learning_rate = 1e-3  # @param {type:"number"}
            self.log_interval = 200  # @param {type:"integer"}
            self.num_eval_episodes = 10  # @param {type:"integer"}
            self.eval_interval = 1000  # @param {type:"integer"}
    
        def createAgent(self):
            fc_layer_params = (100, 50)
            action_tensor_spec = tensor_spec.from_spec(self.train_env.action_spec())
            num_actions = action_tensor_spec.maximum - action_tensor_spec.minimum + 1
    
            def dense_layer(num_units):
                return tf.keras.layers.Dense(
                    num_units,
                    activation=tf.keras.activations.relu,
                    kernel_initializer=tf.keras.initializers.VarianceScaling(
                        scale=2.0, mode='fan_in', distribution='truncated_normal'))
    
            dense_layers = [dense_layer(num_units) for num_units in fc_layer_params]
            q_values_layer = tf.keras.layers.Dense(
                num_actions,
                activation=None,
                kernel_initializer=tf.keras.initializers.RandomUniform(
                    minval=-0.03, maxval=0.03),
                bias_initializer=tf.keras.initializers.Constant(-0.2))
    
            self.q_net = sequential.Sequential(dense_layers + [q_values_layer])
    
            optimizer = tf.keras.optimizers.Adam(learning_rate=self.learning_rate)
            # rain_step_counter = tf.Variable(0)
    
            self.agent = dqn_agent.DqnAgent(
                time_step_spec=self.train_env.time_step_spec(),
                action_spec=self.train_env.action_spec(),
                q_network=self.q_net,
                optimizer=optimizer,
                td_errors_loss_fn=common.element_wise_squared_loss,
                train_step_counter=tf.Variable(0))
    
            self.agent.initialize()
    
            self.eval_policy = self.agent.policy
            self.collect_policy = self.agent.collect_policy
            self.random_policy = random_tf_policy.RandomTFPolicy(
                self.train_env.time_step_spec(), self.train_env.action_spec())
            return True
    
        def compute_avg_return(self, environment, policy, num_episodes=10):
            # mT.compute_avg_return(mT.eval_env, mT.random_policy, 50)
            total_return = 0.0
            for _ in range(num_episodes):
                time_step = environment.reset()
                episode_return = 0.0
                while not time_step.is_last():
                    action_step = policy.action(time_step)
                    time_step = environment.step(action_step.action)
                    episode_return += time_step.reward
                total_return += episode_return
            avg_return = total_return / num_episodes
            print('average return :', avg_return.numpy()[0])
            return avg_return.numpy()[0]
    
        def create_replaybuffer(self):
    
            table_name = 'uniform_table'
            replay_buffer_signature = tensor_spec.from_spec(
                self.agent.collect_data_spec)
            replay_buffer_signature = tensor_spec.add_outer_dim(
                replay_buffer_signature)
    
            table = reverb.Table(table_name,
                                 max_size=self.replay_buffer_max_length,
                                 sampler=reverb.selectors.Uniform(),
                                 remover=reverb.selectors.Fifo(),
                                 rate_limiter=reverb.rate_limiters.MinSize(1),
                                 signature=replay_buffer_signature)
    
            reverb_server = reverb.Server([table])
    
            self.replay_buffer = reverb_replay_buffer.ReverbReplayBuffer(
                self.agent.collect_data_spec,
                table_name=table_name,
                sequence_length=2,
                local_server=reverb_server)
    
            self.rb_observer = reverb_utils.ReverbAddTrajectoryObserver(
                self.replay_buffer.py_client,
                table_name,
                sequence_length=2)
    
            self.dataset = self.replay_buffer.as_dataset(num_parallel_calls=3,
                                                         sample_batch_size=self.batch_size,
                                                         num_steps=2).prefetch(3)
            self.iterator = iter(self.dataset)
    
        def testReplayBuffer(self):
            py_env = cGame()
            py_driver.PyDriver(
                py_env,
                py_tf_eager_policy.PyTFEagerPolicy(
                    self.random_policy,
                    use_tf_function=True),
                [self.rb_observer],
                max_steps=self.initial_collect_steps).run(self.train_env.reset())
    
        def trainAgent(self):
    
            self.returns = list()
            print(self.collect_policy)
            py_env = cGame()
            # Create a driver to collect experience.
            collect_driver = py_driver.PyDriver(
                py_env, # CHANGE 1
                py_tf_eager_policy.PyTFEagerPolicy(
                    self.agent.collect_policy,
                    # batch_time_steps=False, # CHANGE 2
                    use_tf_function=True),
                [self.rb_observer],
                max_steps=self.collect_steps_per_iteration)
    
            # Reset the environment.
            # time_step = self.train_env.reset()
            time_step = py_env.reset()
            for _ in range(self.num_iterations):
    
                # Collect a few steps and save to the replay buffer.
                time_step, _ = collect_driver.run(time_step)
    
                # Sample a batch of data from the buffer and update the agent's network.
                experience, unused_info = next(self.iterator)
                train_loss = self.agent.train(experience).loss
    
                step = self.agent.train_step_counter.numpy()
    
                if step % self.log_interval == 0:
                    print('step = {0}: loss = {1}'.format(step, train_loss))
    
                if step % self.eval_interval == 0:
                    avg_return = self.compute_avg_return(self.eval_env,
                                                         self.agent.policy,
                                                         self.num_eval_episodes)
                    print(
                        'step = {0}: Average Return = {1}'.format(step, avg_return))
                    self.returns.append(avg_return)
    
        def run(self):
            self.createAgent()
            # self.compute_avg_return(self.train_env,self.eval_policy)
            self.create_replaybuffer()
            # self.testReplayBuffer()
            self.trainAgent()
            return True
    
    if __name__ == '__main__':
        mT = mTrainer()
        mT.run()

Received incompatible tensor at flattened index 4 from table 'uniform_table'

洋洋洒洒 2025-02-18 03:06:58

Solution in Code:

from selenium import webdriver
from selenium.webdriver.chrome.options import Options
from selenium.webdriver.chrome.service import Service 
from webdriver_manager.chrome import ChromeDriverManager


# 1
option = Options()
option.binary_location = '/Applications/Google Chrome Beta.app/Contents/MacOS/Google Chrome Beta'

# 2
driver = webdriver.Chrome(service=Service(ChromeDriverManager(version='104.0.5112.20').install()), options=option)

see: this thread

Selenium / Selenium Wire unknown error: cannot determine loading status: unexpected command response

洋洋洒洒 2025-02-18 01:48:40

Like I said, in my opinion, the best solution for you is to set correct rules in the database and create the correct queries to get that data.

Rules:

rules_version = '2';
service cloud.firestore {
  match /databases/{database}/documents {
    match /{document=**} {
      allow read, write: if false;
    }
    match /bookings/{docId} {
      allow read: if resource.data.uid == request.auth.uid || isAdmin()
      // Below, you can keep the second part after && (request.resource.data.uid == null), but I'm not sure whether it will be null or unassigned; this is over-engineered, so you can simply drop the condition after &&.
      allow update: if resource.data.uid == request.auth.uid && request.resource.data.uid == null || isAdmin()
      allow create: if request.auth != null && request.resource.data.uid == request.auth.uid || isAdmin()
      allow delete: if isAdmin()
    }
  }
}

function isAdmin() {
    return request.auth.token.admin == true;
}

Queries you need to make for users:

getBookings() {
  // In the Flutter (cloud_firestore) API the equality filter is written with isEqualTo.
  // You need to specify, using the where() method, that you only want documents with
  // your uid, or the rules will not allow you to get any data.
  var bookings = FirebaseFirestore.instance.collection('bookings').where('uid', isEqualTo: user.uid);
  return bookings.get();
}

How to allow users to read all documents in a collection

洋洋洒洒 2025-02-18 00:22:51

Create a pygame.mask.Mask object and set all of its bits to 1 with fill():

rect_mask = pygame.mask.Mask((rect.width, rect.height))
rect_mask.fill()

Use pygame.mask.Mask.overlap for the collision detection between Mask objects. The offset parameter of overlap() is the relative position of the othermask in relation to the mask the method is called on.

e.g.:

object 1: x1, y1, mask1
object 2: x2, y2, mask2

offset = (x2 - x1, y2 - y1)
if mask1.overlap(mask2, offset):
    print("hit")    

See also PyGame collision with masks and Pygame mask collision.
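
Putting the pieces together, here is a minimal self-contained sketch (my own illustration; the rectangle and the circle sprite are placeholders, not from the question):

import pygame

# placeholder objects just for this sketch
rect = pygame.Rect(100, 100, 50, 50)
sprite_image = pygame.Surface((40, 40), pygame.SRCALPHA)
pygame.draw.circle(sprite_image, (255, 0, 0), (20, 20), 20)
sprite_rect = sprite_image.get_rect(topleft=(120, 110))

# mask covering the whole rectangle: all bits set
rect_mask = pygame.mask.Mask((rect.width, rect.height))
rect_mask.fill()

# mask of the sprite's non-transparent pixels
sprite_mask = pygame.mask.from_surface(sprite_image)

# offset of the rect relative to the sprite
offset = (rect.x - sprite_rect.x, rect.y - sprite_rect.y)
if sprite_mask.overlap(rect_mask, offset):
    print("hit")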

How do I make a mask of a rect?

洋洋洒洒 2025-02-17 15:01:49

When you add things to a Canvas widget using one of its create_xxx() methods, like create_image(), an integer id number is returned which you can use to move the object later via its move() method (see https://tkdocs.com/shipman/canvas-methods.html for the Canvas methods).
That's all you need to do to update what's being displayed; no explicit "refresh" step is needed.

For example, if you initially placed a pawn image somewhere using:

pawn_id = canvas.create_image(50+(x*100), 50+(y*100), image=blackPawnImg)

You could move the pawn image 10 pixels to the right and 20 down via (note that the movement amounts are specified relative to its current position):

canvas.move(pawn_id, 10, 20)

You can also use the more generic itemconfigure() method to change other options associated with Canvas items (such as the color of a line created with create_line()).
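
For instance, a small sketch of that (my own example; line_id is whatever id create_line() returned on the same canvas as above):

line_id = canvas.create_line(10, 10, 200, 200, fill="black")
canvas.itemconfigure(line_id, fill="red")  # change the existing line's color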

How do I update a canvas and images in Tkinter?

洋洋洒洒 2025-02-17 00:30:00

Two things to try:

  1. Remove the json.dumps() call on data. The put_record_batch() method expects base64-encoded binary data for the Data field; json.dumps() returns a string.
  2. Batch rows in groups of 500. The put_record_batch() method supports batching up to 500 records.

Example:

import base64

import boto3
import pymysql

connection = pymysql.connect(host = endpoint, user = username, passwd = password, db = database_name)

FIREHOSE_STREAM = 'DEMOLAMBDAFIREHOSE'
client = boto3.client('firehose')

def lambda_handler(event, context):
    cursor = connection.cursor()
    cursor.execute('SELECT * from inventory.report_product')
    rows = cursor.fetchall()

    records = []
    for row in rows:
        if len(records) < 500:
            records.append({
                'Data': base64.b64encode(row)
            })
        else:
            # call put_record_batch on the previous 500 rows
            response = client.put_record_batch(
                DeliveryStreamName=FIREHOSE_STREAM,
                Records=records
            )
            print(response)

            # clear records and add the current row
            records = []
            records.append({
                'Data': base64.b64encode(row)
            })

    if len(records) > 0:
        # send the final (partial) batch
        response = client.put_record_batch(
            DeliveryStreamName=FIREHOSE_STREAM,
            Records=records
        )
        print(response)

WARNING: This code example is not tested.

Push data from AWS Lambda to Kinesis Firehose using Python

洋洋洒洒 2025-02-16 12:09:01

Your line here is missing whitespace.

if [$bday == $dates]

Here the shell is searching for a command named [$bday, which doesn't exist, so it throws an error. It should instead be:

if [ $bday == $dates ]

Command not found when running a script with variables

洋洋洒洒 2025-02-16 09:08:52

I was able to solve the problem by taking the suggestions from all the answers posted. Thank you.

Below is the code -

function mergeOverlappingIntervals(intervals)
    # sort the intervals by their start value
    sort!(intervals, by = x -> x[1])
    new_interval = [intervals[1]]
    for i in range(2, length(intervals))
        if new_interval[end][2] >= intervals[i][1]
            # overlap: extend the last merged interval
            new_interval[end] = [min(new_interval[end][1], intervals[i][1]), max(new_interval[end][2], intervals[i][2])]
        else
            push!(new_interval, intervals[i])
        end
    end
    return new_interval
end

Julia - last() not working inside a function

洋洋洒洒 2025-02-15 17:28:15

Are you certain you need a dedicated macro?

I would simply map and collect a literal array of tuples.

Or maybe a function doing so would be sufficient?

use num_complex::Complex;
use num_traits::float::Float;

fn vec_cplx<F: Float, const N: usize>(
    tuples: [(F, F); N]
) -> Vec<Complex<F>> {
    tuples
        .into_iter()
        .map(|(re, im)| Complex { re, im })
        .collect()
}

fn main() {
    let v1_f64: Vec<Complex<f64>> = [(1.1, 2.2), (3.3, 4.4), (5.5, 6.6)]
        .into_iter()
        .map(|(re, im)| Complex { re, im })
        .collect();
    println!("{:?}", v1_f64);
    //
    let v1_f32: Vec<Complex<f32>> = [(1.1, 2.2), (3.3, 4.4), (5.5, 6.6)]
        .into_iter()
        .map(|(re, im)| Complex { re, im })
        .collect();
    println!("{:?}", v1_f32);
    //
    let v2_f64: Vec<Complex<f64>> =
        vec_cplx([(1.1, 2.2), (3.3, 4.4), (5.5, 6.6)]);
    println!("{:?}", v2_f64);
    //
    let v2_f32: Vec<Complex<f32>> =
        vec_cplx([(1.1, 2.2), (3.3, 4.4), (5.5, 6.6)]);
    println!("{:?}", v2_f32);
}

How to write a macro to convert a list of elements into a Vec of complex numbers

洋洋洒洒 2025-02-15 06:42:41

If you are using Spring Boot, it automatically configures a container factory using the properties in application.yml/properties.

If you are NOT using Spring Boot, you have to define your own factory.

When do I need to define a JmsListenerContainerFactory bean?
