author     Sunpoet Po-Chuan Hsieh <sunpoet@FreeBSD.org>    2019-06-21 23:08:45 +0000
committer  Sunpoet Po-Chuan Hsieh <sunpoet@FreeBSD.org>    2019-06-21 23:08:45 +0000
commit     751c702048755415b8406250122ae5443b0f84a7 (patch)
tree       e18c2c5df7000501c618bcade7841cc9b9431ee9 /math/py-gym
parent     8515ff13fb3470ca1b5e855385a7210098f2c95e (diff)
Add py-gym 0.12.5
OpenAI Gym is a toolkit for developing and comparing reinforcement learning
algorithms. This is the gym open-source library, which gives you access to a
standardized set of environments.

gym makes no assumptions about the structure of your agent, and is compatible
with any numerical computation library, such as TensorFlow or Theano. You can
use it from Python code, and soon from other languages.

There are two basic concepts in reinforcement learning: the environment
(namely, the outside world) and the agent (namely, the algorithm you are
writing). The agent sends actions to the environment, and the environment
replies with observations and rewards (that is, a score).

The core gym interface is Env, which is the unified environment interface.
There is no interface for agents; that part is left to you. The following are
the Env methods you should know:
- reset(self): Reset the environment's state. Returns observation.
- step(self, action): Step the environment by one timestep. Returns
  observation, reward, done, info.
- render(self, mode='human'): Render one frame of the environment. The default
  mode will do something human friendly, such as pop up a window.

WWW: https://gym.openai.com/
WWW: https://github.com/openai/gym
Notes:
svn path=/head/; revision=504818
Diffstat (limited to 'math/py-gym')
 -rw-r--r--  math/py-gym/Makefile  | 27
 -rw-r--r--  math/py-gym/distinfo  |  3
 -rw-r--r--  math/py-gym/pkg-descr | 24
 3 files changed, 54 insertions(+), 0 deletions(-)
diff --git a/math/py-gym/Makefile b/math/py-gym/Makefile
new file mode 100644
index 000000000000..e293820443d3
--- /dev/null
+++ b/math/py-gym/Makefile
@@ -0,0 +1,27 @@
+# Created by: Po-Chuan Hsieh <sunpoet@FreeBSD.org>
+# $FreeBSD$
+
+PORTNAME= gym
+PORTVERSION= 0.12.5
+CATEGORIES= math python
+MASTER_SITES= CHEESESHOP
+PKGNAMEPREFIX= ${PYTHON_PKGNAMEPREFIX}
+
+MAINTAINER= sunpoet@FreeBSD.org
+COMMENT= OpenAI toolkit for developing and comparing your reinforcement learning agents
+
+LICENSE= MIT
+
+RUN_DEPENDS= ${PYTHON_PKGNAMEPREFIX}numpy>=1.10.4:math/py-numpy@${PY_FLAVOR} \
+ ${PYTHON_PKGNAMEPREFIX}pyglet>=0:graphics/py-pyglet@${PY_FLAVOR} \
+ ${PYTHON_PKGNAMEPREFIX}scipy>=0:science/py-scipy@${PY_FLAVOR} \
+ ${PYTHON_PKGNAMEPREFIX}six>=0:devel/py-six@${PY_FLAVOR}
+TEST_DEPENDS= ${PYTHON_PKGNAMEPREFIX}mock>=0:devel/py-mock@${PY_FLAVOR} \
+ ${PYTHON_PKGNAMEPREFIX}pytest>=0:devel/py-pytest@${PY_FLAVOR}
+
+USES= python
+USE_PYTHON= autoplist concurrent distutils
+
+NO_ARCH= yes
+
+.include <bsd.port.mk>
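
As a quick sanity check after installing the port, a sketch like the following
(hypothetical, not part of the port itself) confirms that gym and the runtime
dependencies declared in RUN_DEPENDS above all import cleanly:

    # smoke_test.py: hypothetical check that gym and its declared
    # run-time dependencies (numpy, pyglet, scipy, six) import cleanly.
    import gym
    import numpy
    import pyglet
    import scipy
    import six

    print("gym", gym.__version__)      # expected 0.12.5 for this revision
    print("numpy", numpy.__version__)  # RUN_DEPENDS requires numpy>=1.10.4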
diff --git a/math/py-gym/distinfo b/math/py-gym/distinfo
new file mode 100644
index 000000000000..f57e8cf90bb2
--- /dev/null
+++ b/math/py-gym/distinfo
@@ -0,0 +1,3 @@
+TIMESTAMP = 1561148961
+SHA256 (gym-0.12.5.tar.gz) = 027422f59b662748eae3420b804e35bbf953f62d40cd96d2de9f842c08de822e
+SIZE (gym-0.12.5.tar.gz) = 1544308
diff --git a/math/py-gym/pkg-descr b/math/py-gym/pkg-descr
new file mode 100644
index 000000000000..291faba27a40
--- /dev/null
+++ b/math/py-gym/pkg-descr
@@ -0,0 +1,24 @@
+OpenAI Gym is a toolkit for developing and comparing reinforcement learning
+algorithms. This is the gym open-source library, which gives you access to a
+standardized set of environments.
+
+gym makes no assumptions about the structure of your agent, and is compatible
+with any numerical computation library, such as TensorFlow or Theano. You can
+use it from Python code, and soon from other languages.
+
+There are two basic concepts in reinforcement learning: the environment (namely,
+the outside world) and the agent (namely, the algorithm you are writing). The
+agent sends actions to the environment, and the environment replies with
+observations and rewards (that is, a score).
+
+The core gym interface is Env, which is the unified environment interface. There
+is no interface for agents; that part is left to you. The following are the Env
+methods you should know:
+- reset(self): Reset the environment's state. Returns observation.
+- step(self, action): Step the environment by one timestep. Returns observation,
+ reward, done, info.
+- render(self, mode='human'): Render one frame of the environment. The default
+ mode will do something human friendly, such as pop up a window.
+
+WWW: https://gym.openai.com/
+WWW: https://github.com/openai/gym
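
To illustrate the Env methods listed in the description, here is a minimal
sketch of the standard agent-environment loop, assuming the registered
CartPole-v1 environment and a random policy standing in for a real agent:

    import gym

    env = gym.make("CartPole-v1")

    for episode in range(3):
        observation = env.reset()   # reset() returns the initial observation
        done = False
        total_reward = 0.0
        while not done:
            env.render()            # default mode pops up a window
            # A real agent would choose an action from the observation;
            # here we just sample a random one.
            action = env.action_space.sample()
            observation, reward, done, info = env.step(action)
            total_reward += reward
        print("episode", episode, "total reward", total_reward)

    env.close()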