PsychoPy Manual
Release 1.65.01
Jonathan Peirce
CONTENTS

1 Overview
  1.1 Features
  1.2 Hardware Integration
  1.3 System requirements
  1.4 How to cite PsychoPy
  1.5 Help PsychoPy
2 Contributing to the project
  2.1 Why make it free?
  2.2 How do I contribute changes?
  2.3 Contribute to the Forum (mailing list)
3 Credits
  3.1 Developers
  3.2 Included packages
  3.3 Funding
4 Installation
  4.1 Recommended hardware
  4.2 Windows
  4.3 Mac OS X
  4.4 Linux
  4.5 Dependencies
  4.6 Suggested packages
5 Getting Started
6 General issues
  6.1 Monitor Center
  6.2 Units for the window and stimuli
  6.3 Color spaces
  6.4 Preferences
  6.5 Data outputs
  6.6 Timing Issues and synchronisation
7 Builder
  7.1 Builder concepts
  7.2 Routines
  7.3 Flow
  7.4 Components
  7.5 Experiment settings
  7.6 Generating outputs (datafiles)
  7.7 Common mistakes (aka gotchas)
  7.8 Future developments
8 Coder
  8.1 Basic Concepts
  8.2 Tutorials
9 Troubleshooting
  9.1 The application doesn't start
  9.2 I run a Builder experiment and nothing happens
  9.3 Manually turn off the viewing of output
  9.4 Use the source (Luke?)
  9.5 Cleaning preferences and app data
10 Recipes (How-tos)
  10.1 Builder - providing feedback
  10.2 Generating formatted strings
  10.3 Coder - interleave staircases
11 Frequently Asked Questions (FAQs)
  11.1 Why is the bits++ demo not working?
  11.2 Can PsychoPy run my experiment with sub-millisecond timing?
12 Resources (e.g. for teaching)
  12.1 Upcoming events
  12.2 Previous events
13 Reference Manual (API)
  13.1 psychopy.core - basic functions (clocks etc.)
  13.2 psychopy.visual - many visual stimuli
  13.3 psychopy.data - functions for storing/saving/analysing data
  13.4 psychopy.event - for getting keypresses and mouse clicks
  13.5 psychopy.filters - helper functions for creating filters
  13.6 psychopy.gui - create dialogue boxes
  13.7 psychopy.hardware - hardware interfaces
  13.8 psychopy.info - functions for getting information about the system
  13.9 psychopy.log - control what gets logged
  13.10 psychopy.misc - miscellaneous routines for converting units etc.
  13.11 psychopy.monitors - for those that don't like Monitor Center
  13.12 psychopy.parallel - functions for interacting with the parallel port
  13.13 psychopy.serial - functions for interacting with the serial port
  13.14 psychopy.sound - play various forms of sound
  13.15 Indices and tables
14 For Developers
  14.1 Using the repository
  14.2 Adding a new Builder Component
15 PsychoPy Experiment file format (.psyexp)
  15.1 Parameters
  15.2 Settings
  15.3 Routines
  15.4 Components
  15.5 Flow
  15.6 Names
16 Glossary
17 Indices
Python Module Index
Index
CHAPTER ONE
OVERVIEW
PsychoPy is an open-source package for running experiments in Python (a real and free alternative to Matlab). PsychoPy combines the graphical strengths of OpenGL with the easy Python syntax to give scientists a free and simple stimulus presentation and control package. It is used by many labs worldwide for psychophysics, cognitive neuroscience and experimental psychology. Because it's open source, you can download it and modify the package if you don't like it. And if you make changes that others might use then please consider giving them back to the community via the mailing list. PsychoPy has been written and provided to you absolutely for free. For it to get better it needs as much input from everyone as possible.
1.1 Features
There are many advantages to using PsychoPy, but here are some of the key ones:
- Simple install process
- Huge variety of stimuli (see screenshots) generated in real-time: linear gratings, constantly updating bitmaps, radial gratings, random dots, movies (DivX, mov, mpg...), text (unicode in any truetype font), shapes, sounds (tones, numpy arrays, wav, ogg...)
- Platform independent - run the same script on Win, OS X or Linux
- Flexible stimulus units (degrees, cm, or pixels)
- Coder interface for those that like to program
- Builder interface for those that don't
- Input from keyboard, mouse or button boxes
- Multi-monitor support
- Automated monitor calibration (requires PR650 or Minolta LS110)
1.2 Hardware Integration

PsychoPy supports communication via serial ports, parallel ports and compiled drivers (dlls and dylibs), so it can talk to almost any hardware, including:
- Spectrascan PR650
- Minolta LS110
- Cambridge Research Systems Bits++
- Cedrus response boxes (RB7xx series)
CHAPTER TWO
CONTRIBUTING TO THE PROJECT
If there is an error message, try to provide it completely. If you had problems and worked out how to fix things, even if it turned out the problem was your own lack of understanding, please still contribute the information. Others are likely to have similar problems. Maybe the documentation could be clearer, or your email to the forum will be found by others googling for the same problem. To make your message more useful, please try to:
- provide info about your system and PsychoPy version (e.g. the output of the sysInfo demo in Coder). A lot of problems are specific to a particular graphics card or platform
- provide a minimal example of the breaking code (if you're writing scripts)
CHAPTER THREE
CREDITS
3.1 Developers
PsychoPy is predominantly written and maintained by Jon Peirce but has received code from a number of contributors:
- Jeremy Gray (various aspects of code and ideas)
- Yaroslav Halchenko (building the Debian package and a lot more)
- Dave Britton
- Ariel Rokem
- Gary Strangman
- C Luhmann
3.3 Funding
The PsychoPy project has attracted small grants from the HEA Psychology Network and Cambridge Research Systems. Thanks to those organisations for their support. Jon is paid by The University of Nottingham and has been funded by the BBSRC.
CHAPTER FOUR
INSTALLATION
Like many Python packages, PsychoPy is built on, and depends on, a number of other libraries (OpenGL, numpy...). Details on how to install those are below.
4.2 Windows:
If you're new to Python then you probably want to install the standalone package. This includes a copy of Python and all the dependent libraries (if you do have Python already installed, that won't be touched by this installation). Once installed, you'll find a link to the PsychoPy application in >Start>Programs>PsychoPy2. Click that, then the demos menu, to get going. You should make sure you have reasonably current drivers for your graphics card (download the latest from the vendor, rather than using the pre-installed Windows drivers). The standalone installer adds the PsychoPy folder to your path, so you can run the included version of Python from the command line etc. If you have your own version of Python installed as well then you need to check which one is run by default, and change your path according to your personal preferences.
4.3 Mac OS X:
There are different ways to install PsychoPy on a Mac that will suit different users. Intel Mac users (with OS X v10.5) can simply download the standalone application bundle (the dmg file) and drag it to their Applications folder. The app bundle contains its own independent Python and all the dependencies and will not interact with anything else on your system (except its own preferences).
Users of macports can install PsychoPy and all its dependencies simply with: sudo port install py25-psychopy (thanks to James Kyles for that). For PPC Macs (or for Intel Mac users that want their own custom Python for running PsychoPy) you need to install the dependencies and PsychoPy manually. The easiest way is to use the Enthought Python Distribution. It's free (for academic use) and the only things it misses are avbin (if you want to play movies) and pygame (for sound reproduction). You could alternatively manually install the framework build of Python and download all the dependencies below. One advantage to this is that you can then upgrade versions with:
sudo /usr/local/bin/easy_install-2.5 -N -Z -U psychopy
4.4 Linux:
For Debian users, PsychoPy is in the Debian packages index so you can simply do:
sudo apt-get install psychopy
For Debian-based distributions (e.g. Ubuntu):
1. Add the following sources in Synaptic, in the Configuration > Repository dialog box, under Other software:
deb http://neuro.debian.net/debian karmic main contrib non-free
deb-src http://neuro.debian.net/debian karmic main contrib non-free
2. Then follow the Package authentication procedure described in http://neuro.debian.net/
3. Then install the psychopy package under Synaptic or through sudo apt-get install psychopy, which will install all dependencies.
For non-Debian systems you need to install the dependencies below manually and then PsychoPy (with easy_install?). Thanks to Yaroslav Halchenko for his work on the Debian package.
4.5 Dependencies
If you want to install each library individually rather than use the simple distributions of packages above then you can download the following. Make sure you get the correct version for your OS and your version of Python.
- Python (2.4.x or 2.5.x, NOT version 3)
- setuptools
- numpy (version 0.9.6 or greater)
- scipy (version 0.4.8 or greater)
- pyglet (version 1.1 or greater)
- pygame (for playing sounds; must be version 1.8 or greater)
- pywin32 (only needed for Windows)
- wxPython (version 2.8 or greater)
- Python Imaging Library (easier to install with setuptools/easy_install)
- matplotlib (for plotting stuff)
- winioport (to use the parallel port, win32 only)
- ctypes (this is already included in Python 2.5)
- lxml (needed for printing/saving Builder experiment files)
CHAPTER FIVE
GETTING STARTED
PsychoPy has three main components: the application's Coder view, the Builder view, and an underlying API programming library. These can be used in various ways depending on the user's preference and experience:

1. Builder view. For those that prefer not to program, and for those new to Python, you can generate a wide range of experiments easily from the Builder. This has an intuitive, graphical user interface (GUI). You can always export your experiment to a script for fine-tuning, and this might be an ideal way for experienced programmers to learn the syntax of Python.
2. Coder view. For those comfortable with programming, but maybe inexperienced with Python, the Coder view is ideal. This is a relatively basic editor but does support syntax highlighting and code folding etc. It also has a demo menu where you can check out a wide variety of PsychoPy scripts to get you started.
3. The API. Experienced Python programmers can simply import the libraries and use them like any other package (the Coder tutorials and demos should help get you going and the API reference will give you the details).

The Builder and Coder views are both components of the PsychoPy app. If you've installed the standalone version of PsychoPy on MS Windows then there should be an obvious link to PsychoPy in your >Start>Programs. If you installed the standalone version on OS X then the app is where you dragged it (!). On these two platforms you can open the Builder and Coder views from the View menu and the default view can be set from the preferences. If the PsychoPy app is started with the flags --coder (or -c) or --builder (or -b), e.g. on Linux, then the preferences will be overridden and that view will be created as the app opens.
CHAPTER SIX
GENERAL ISSUES
6.1 Monitor Center
PsychoPy provides a simple and intuitive way for you to calibrate your monitor and provide other information about it and then import that information into your experiment. Information is inserted in the Monitor Center (Tools menu), which allows you to store information about multiple monitors and keep track of multiple calibrations for the same monitor. For experiments written in the Builder view, you can then import this information by simply specifying the name of the monitor that you wish to use in the Experiment settings dialog. For experiments created as scripts you can retrieve the information when creating the Window by simply naming the monitor that you created in Monitor Center. e.g.:
from psychopy import visual
win = visual.Window([1024,768], mon='SonyG500')
Of course, the name of the monitor in the script needs to match perfectly the name given in the Monitor Center.
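The stored calibration can also be fetched and adjusted from a script via the psychopy.monitors module. A minimal sketch, assuming a monitor called SonyG500 has been saved in Monitor Center (the name and viewing distance here are illustrative):

from psychopy import monitors, visual

mon = monitors.Monitor('SonyG500')   #must match a name saved in Monitor Center
mon.setDistance(57)                  #viewing distance in cm (illustrative value)
win = visual.Window([1024, 768], monitor=mon, units='deg')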
The two additional tables in the Calibration box of the Monitor Center provide conversion from DKL and LMS colour spaces to RGB.
Assumes: pixels are square. This can be verified by drawing a stimulus with matching width and height and checking that it is in fact square. For a CRT this can be controlled by setting the size of the viewable screen (settings on the monitor itself).
In PsychoPy these values are specified in units of degrees for elevation and azimuth, and as a float (ranging -1:1) for the contrast. Examples:
- [90,0,1] is white (maximum elevation aligns the color with the luminance axis)
- [0,0,1] is an isoluminant stimulus, with azimuth 0 (S-axis)
- [0,45,1] is an isoluminant stimulus, with an oblique azimuth
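As a rough sketch of how such a colour is used in a script (assuming the monitor has a colour calibration so that the DKL-to-RGB conversion is possible; stimulus parameters are illustrative):

from psychopy import visual

win = visual.Window([800, 600], monitor='testMonitor', units='deg')
#an isoluminant grating with an oblique azimuth, specified in DKL space
gabor = visual.PatchStim(win, tex='sin', mask='gauss', size=4, sf=2,
                         colorSpace='dkl', color=[0, 45, 1])
gabor.draw()
win.flip()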
6.4 Preferences
6.4.1 General settings
winType: PsychoPy can use one of two backends for creating windows and drawing: pygame and pyglet. Here you can set the default backend to be used.
units: Default units for windows and visual stimuli (deg, norm, cm, pix). See Units for the window and stimuli. Can be overridden by individual experiments.
fullscr: Should windows be created full screen by default? Can be overridden by individual experiments.
allowGUI: When the window is created, should the frame of the window and the mouse pointer be visible? If set to False then both will be hidden.
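These preferences set the application-wide defaults; the same settings also exist as arguments to visual.Window, so any one script can override them. A minimal sketch (values are illustrative):

from psychopy import visual

#override the application preferences for this one window
win = visual.Window([800, 600], winType='pyglet', fullscr=False,
                    allowGUI=True, units='norm')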
6.5.1 Log file
Log files are actually rather difficult to use for data analysis but provide a chronological record of everything that happened during your study. The level of content in them depends on you. See Logging data for further information.
import numpy
from psychopy.misc import fromFile

datFile = fromFile('fileName.psydat')
#get info (added when the handler was created)
print datFile.extraInfo
#get data
print datFile.data
#get list of conditions
conditions = datFile.trialList
for condN, condition in enumerate(conditions):
    print condition, datFile.data['response'][condN], numpy.mean(datFile.data['response'][condN])
So, yes, PsychoPy's temporal precision can be very good, but the overall accuracy is likely to be severely limited by your experimental hardware. Below are some further details on timing issues.

Warning: The information about timing in PsychoPy assumes that your graphics card is capable of synchronising with the monitor frame rate. For integrated Intel graphics chips (e.g. GMA 945) under Windows, this is not true and the use of those chips is not recommended for serious experimental use as a result. Desktop systems can have a moderate graphics card added for around £30 which will be vastly superior in performance.
Show me all the frame times that I recorded

While recording frame times, these are simply appended, every frame, to win.frameIntervals (a list). You can simply plot these at the end of your script using pylab:
import pylab
pylab.plot(win.frameIntervals)
pylab.show()
Or you could save them to disk. A convenience function is provided for this:
win.saveFrameIntervals(fileName=None, clear=True)
The above will save the currently stored frame intervals (using the default filename, lastFrameIntervals.log) and then clears the data. The saved file is a simple text file. At any time you can also retrieve the time of the last frame flip using win.lastFrameT (the time is synchronised with log.defaultClock so it will match any logging commands that your script uses).

Blocking on the VBI

As of version 1.62 PsychoPy blocks on the vertical blank interval, meaning that, once Window.flip() has been called, no code will be executed until that flip actually takes place. The timestamp for the above frame interval measurements is taken immediately after the flip occurs. Run the timeByFrames demo in Coder to see the precision of these measurements on your system. They should be within 1ms of your mean frame interval. Note that Intel integrated graphics chips (e.g. GMA 945) under win32 do not sync to the screen at all and so blocking on those machines is not possible.
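Frame-interval recording is not on by default; a minimal sketch of turning it on and then saving the measurements (window size is illustrative):

from psychopy import visual

win = visual.Window([800, 600], fullscr=False)
win.setRecordFrameIntervals(True)   #append a time to win.frameIntervals on every flip
for frameN in range(300):
    win.flip()
win.saveFrameIntervals(fileName=None, clear=True)  #writes 'lastFrameIntervals.log' by default
win.close()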
2. Shut down as many programs and background processes as possible. Although modern processors are fast and often have multiple cores, other running software can still disrupt timing. Common culprits include:
- anti-virus auto-updating (if you're allowed)
- email checking software
- file indexing software
- backup solutions (e.g. TimeMachine)
- Dropbox
- synchronisation software
Writing optimal scripts

1. Run in full-screen mode (rather than simply filling the screen with your window). This way the OS doesn't have to spend time working out what application is currently getting keyboard/mouse events.
2. Don't generate your stimuli when you need them. Generate them in advance and then just modify them later with methods like setContrast(), setOrientation() etc. (a minimal sketch of this appears after the lists below).
3. Calls to the following functions are comparatively slow; they require more CPU time than most other functions:
(a) PatchStim.setTexture()
(b) RadialStim.setTexture()
(c) TextStim.setText()
4. If you don't have OpenGL 2.0 then calls to setContrast, setRGB and setOpacity will also be slow, because they also make a call to setTexture(). If you have shader support then this call is not necessary and a large speed increase will result.
5. Avoid loops in your Python code (use numpy arrays to do maths with lots of elements).
6. If you need to create a large number (e.g. greater than 10) of similar stimuli, then try the ElementArrayStim.

Possible good ideas

It isn't clear that these actually make a difference, but they might:
1. disconnect the internet cable (to prevent programs performing auto-updates?)
2. on Macs you can actually shut down the Finder. It might help. See Alex Holcombe's page here
3. use a single screen rather than two (probably there is some graphics card overhead in managing double the number of pixels?)
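A minimal sketch of point 2 above (create stimuli once before the loop, then only change cheap attributes inside it); the stimulus parameters are illustrative:

from psychopy import visual

win = visual.Window([800, 600], fullscr=False, units='deg')
gabor = visual.PatchStim(win, tex='sin', mask='gauss', size=4, sf=2)  #built once, in advance

for ori in [0, 45, 90, 135]:
    gabor.setOri(ori)    #cheap update; no texture regeneration needed
    gabor.draw()
    win.flip()
win.close()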
CHAPTER SEVEN
BUILDER
Building experiments in a GUI

Warning: As at version v1.61.00, the Builder view is very much beta-testing software. Check carefully that the stimuli and response collection are as expected.

NB. PsychoPy users may be interested in the NinStim package.
7.1.4 Demos
There are a couple of demos included with the package, which you can find in their own special menu. When you load these, the first thing to do is make sure the experiment settings specify the same resolution as your monitor, otherwise the screen can appear off-centred and strangely scaled.

Stroop demo

This runs a digital demonstration of the Stroop effect [1]. The experiment presents a series of coloured words written in coloured inks. Subjects have to report the colour of the letters for each word, but find it harder to do so when the letters are spelling out a different (incongruous) colour. Reaction times for the congruent trials (where letter colour matches the written word) are faster than for the incongruent trials. From this demo you should note:
- How to set up a trial list in a .csv file
- How to record key presses and reaction times (using the resp Component in the trial Routine)
- How to change a stimulus parameter on each repetition of the loop. The text and rgb values of the word Component are based on thisTrial, which represents a single iteration of the trials loop. They have been set to change every repeat (don't forget that step!)
- How to present instructions: just have a long-lasting TextStim and then force the end of the Routine when a key is pressed (but don't bother storing the key press).

Grating demo
As at version 1.50.04 (I plan to update it to demo a mini psychophysics experiment), this is a very simple demo showing how to present a drifting grating. Note:
- The stimulus orientation is governed by expInfo['ori'], which comes from the Python dictionary created in the Experiment Settings dialog.
- The phase of the stimulus is set to change every frame and its value is determined by the value of trialClock.getTime()*2. Every Routine has a clock associated with it that gets reset at the beginning of the iteration through the Routine. There is also a globalClock that can be used in the same way.
- The phase of a Patch Component ranges 0-1 (and wraps to that range if beyond it). The result in this case is that the grating drifts at a rate of 2Hz.
[1] Stroop, J.R. (1935). Studies of interference in serial verbal reactions. Journal of Experimental Psychology, 18, 643-662.
7.2 Routines
An experiment consists of one or more Routines. A Routine might specify the timing of events within a trial or the presentation of instructions or feedback. Multiple Routines can then be combined in the Flow, which controls the order in which they occur and the way in which they repeat. To create a new Routine, use the Experiment menu. Within a Routine there are a number of components. These components determine the occurrence of a stimulus or the recording of a response. Any number of components can be added to a Routine. Each has its own line in the Routine view that shows when the component starts and finishes in time, and these can overlap. For now the time axis of the Routines panel is fixed, representing seconds (one line is one second). This will hopefully change in the future so that units can also be number of frames (more precise) and can be scaled up or down to allow very long or very short Routines to be viewed easily. That's on the wishlist...
7.3 Flow
In the Flow panel a number of Routines can be combined to form an experiment. For instance, your study might have a Routine that presents initial instructions and waits for a key to be pressed, followed by a Routine that presents one trial and is repeated 5 times with various different parameters set. All of this is achieved in the Flow panel.
7.3.2 Loops
Loops control the repetition of Routines and the choice of stimulus parameters for each. PsychoPy can generate the next trial based on the method of constants or using an adaptive staircase. To insert a loop use the button on the left of the Flow panel, or the item in the Experiment menu of the Builder. The start and end of a loop are set in the same way as the location of a Routine (see above), using numbers to indicate the entry points on the time line. Loops can encompass one or more Routines and other loops (i.e. they can be nested).

As with components in Routines, the loop must be given a name, which must be unique and made up of only alphanumeric characters (underscores are allowed). I would normally use a plural name, since the loop represents multiple repeats of something. For example, trials, blocks or epochs would be good names for your loops.

Method of Constants

Selecting a loop type of random or sequential will result in a method of constants experiment, whereby the types of trials that can occur are predetermined. In this case, a file must be provided that describes the parameters for the repeats. This should be an Excel 2007 (xlsx) file or a comma-separated-value (csv) file, in which columns refer to parameters that are needed to describe stimuli etc. and rows each describe one type of trial. These can easily be generated from a spreadsheet package like Excel. The top row should give headers: text labels describing the contents of that column (which must not include spaces or characters other than letters, numbers or underscores). For example, a file containing the following table:
ori     text    corrAns
0       aaa     left
90      aaa     right
0       bbb     left
90      bbb     right
would represent 4 different conditions (trial types) with parameters ori, text and corrAns. It's really useful to include a column called corrAns that shows what the correct key press is going to be for this trial (if there is one).

If the loop type is sequential then, on each iteration of the Routines, the next row will be selected in order, whereas under the random type the next row will be selected randomly. nReps determines how many repeats will be performed (for all conditions). All conditions will be presented once before the second repeat etc.

Staircase methods

The loop type staircase allows the implementation of simple up-down staircases where an intensity value is varied trial-by-trial according to certain parameters. For this type of loop a correct answer must be provided from something like a Keyboard Component. Various parameters for the staircase can be set to govern how many trials will be conducted and how many correct or incorrect answers make the staircase go up or down.

Accessing loop parameters from components

The parameters from your loops are accessible to any component enclosed within that loop. The simplest (and default) way to address these variables is simply to call them by the name of the parameter, prepended with $ to indicate that this is the name of a variable. For example, if your Flow contains a loop with the above table as its input trial types file, then you could give one of your stimuli an orientation $ori which would depend on the current trial type being presented. Example scenarios:

1. You want to loop randomly over some conditions in a loop called trials. Your conditions are stored in a csv file with headings ori, text, corrAns which you provide to this loop. You can then access these values from any component using $ori, $text, and $corrAns.
2. You create a random loop called blocks and give it an Excel file with a single column called movieName listing filenames to be played. On each repeat you can access this with $movieName.
3. You create a staircase loop called stairs. On each trial you can access the current value in the staircase with $thisStair.

Note: When you set a component to use a parameter that will change (e.g. on each repeat through the loop) you should remember to change the component parameter from constant to set every repeat or set every frame or it won't have any effect!

Reducing namespace clutter (advanced)

The downside of the above approach is that the names of trial parameters must be different between every loop, as well as not matching any of the predefined names in Python, numpy and PsychoPy. For example, the stimulus called movie cannot use a parameter also called movie (so you need to call it movieName). An alternative method can be used without these restrictions. If you set the Builder preference unclutteredNamespace to True you can then access the variables by referring to the parameter as an attribute of the singular name of the loop prepended with this. For example, if you have a loop called trials which has the above file attached to it, then you can access the stimulus ori with $thisTrial.ori. If you have a loop called blocks you could use $thisBlock.corrAns. Now, although the name of the loop must still be valid and unique, the names of the parameters of the file do not have the same requirements (they must still not contain spaces or punctuation characters).
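Conditions files are normally created in a spreadsheet package, but as a rough sketch, one can also be written from a few lines of Python (the filename and values here are illustrative, loosely following the example table above):

#write a minimal conditions file that a Builder loop can read
rows = [(0, 'left'), (90, 'right'), (0, 'left'), (90, 'right')]
f = open('conditions.csv', 'w')
f.write('ori,corrAns\n')              #header row: the parameter names
for ori, corrAns in rows:
    f.write('%i,%s\n' % (ori, corrAns))
f.close()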
7.4 Components
Routines in the Builder contain any number of components, which typically define the parameters of a stimulus or an input/output device. The following components are available, as at version 1.50, but further components will be added in the future, including a Mouse, Parallel/Serial ports, other visual stimuli (e.g. GeometricStim) and a Custom component that will allow arbitrary code to be executed.
Within your code you can use other variables and modules from the script. For example, all Routines have a stopwatch-style Clock associated with them, so the time elapsed in the current Routine can be checked with:

currentT = trialClock.getTime()

To see what other variables you might want to use, and also what terms you need to avoid in your chunks of code, compile your script before inserting the code object and take a look at the contents of that script.

Parameters

The parameters of the Code Component simply specify the code that will get executed at 5 different points within the experiment. You can use as many or as few of these as you need for any Code Component:

Begin Experiment: Things that need to be done just once, like importing a supporting module or initialising a variable for later use.
Begin Routine: Certain things might need to be done just once at the start of a Routine, e.g. at the beginning of each trial you might decide which side a stimulus will appear.
Each Frame: Things that need to be updated constantly, throughout the experiment. Note that these will be executed exactly once per video frame (on the order of every 10ms).
End Routine: At the end of the Routine (e.g. the trial) you may need to do additional things, like checking if the participant got the right answer.
End Experiment: Use this for things like saving data to disk, presenting a graph(?), resetting hardware to its original state etc.

Example code uses
Set a random location for your target stimulus
There are many ways to do this, but you could add the following to the Begin Routine section of a Code Component at the top of your Routine. Then set your stimulus position to be $targetPos and set the correct answer field of a Keyboard Component to be $corrAns (set both of these to update on every repeat of the Routine):
if random()>0.5:
    targetPos=[-2.0, 0.0]  #on the left
    corrAns='left'
else:
    targetPos=[+2.0, 0.0]  #on the right
    corrAns='right'
Allowed keys: A list of allowed keys can be inserted, e.g. ['m','z','1','2']. If this box is left blank then any keys will be read. Only allowed keys count as having been pressed; any other key will not be stored and will not force the end of the Routine. Note that key names (even for number keys) should be given in inverted commas, as with text parameters. Cursor keys can be accessed with 'up', 'down', etc.

Store: Which key press, if any, should be stored: the first to be pressed, the last to be pressed or all that have been pressed. If the key press is to force the end of the trial then this setting is unlikely to be necessary, unless two keys happen to be pressed in the same video frame.

Store correct: Check this box if you wish to store whether or not this key press was correct. If so then fill in the next box that defines what would constitute a correct answer. This is given as Python code that should return True (1) or False (0). Often this correct answer will be defined in the settings of the Loops.

Force end trial: If this box is checked then the Routine will end as soon as one of the allowed keys is pressed.

Store response time: If checked then the response time will also be stored. This time will be taken from the start of keyboard checking.

See Also: API reference for event
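For reference, a rough Coder-side sketch of the same idea using psychopy.event (the key names and clock are illustrative, not part of the Builder component itself):

from psychopy import core, event

kbClock = core.Clock()   #response times are measured from when checking starts
event.clearEvents()      #discard any earlier key presses
#only 'left' and 'right' are allowed; each key is returned with a timestamp
keys = event.getKeys(keyList=['left', 'right'], timeStamped=kbClock)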
Time Relative To: Whenever the mouse state is saved (e.g. on button press or at end of trial) a time is saved too. Do you want this time to be relative to the start of the Routine, or the start of the whole experiment?

See Also: API reference for mouse
startTime [float or integer]: The time (relative to the beginning of this Routine) that the stimulus should first appear.
duration [float or integer]: The duration for which the stimulus is presented.
colour [triplet of values between -1 and 1]: See Color spaces.
colour space [rgb, dkl or lms]: See Color spaces.
image [a filename, a standard name (sin, sqr) or a numpy array of dimensions NxNx1 or NxNx3]: This specifies the image that will be used as the texture for the visual patch. The image can be repeated on the patch (in either x or y or both) by setting the spatial frequency to be high (or can be stretched so that only a subset of the image appears by setting the spatial frequency to be low). Filenames can be relative or absolute paths and can refer to most image formats (e.g. tif, jpg, bmp, png...).
interpolate [True or False]: If the interpolate box is checked (True) then linear interpolation will be applied when the image is rescaled to the appropriate size for the screen. Otherwise a nearest-neighbour rule will be used.
mask [a filename, a standard name (gauss, circle) or a numpy array of dimensions NxNx1]: The mask defines the shape and, potentially, intermediate transparency values for the patch. For values of -1 the patch will be transparent, for values of 1 it will be opaque and for 0 it will be semi-transparent.
ori [degrees]: The orientation of the entire patch (texture and mask) in degrees.
phase [single float or pair of values [X,Y]]: The position of the texture within the mask, in both X and Y. If a single value is given it will be applied to both dimensions. The phase has units of cycles (rather than degrees or radians), wrapping at 1. As a result, setting the phase to 0, 1, 2... is equivalent, causing the texture to be centred on the mask. A phase of 0.25 will cause the texture to shift by a quarter of a cycle (equivalent to pi/2 radians).
pos [[X,Y]]: The position of the centre of the stimulus, in the units specified by the stimulus or window.
SF [[SFx, SFy] or a single value (applied to x and y)]: The spatial frequency of the texture on the patch. The units are dependent on the specified units for the stimulus/window; if the units are deg then the SF units will be c/deg, if units are norm then the SF units will be cycles per stimulus.
size [[sizex, sizey] or a single value (applied to x and y)]: The size of the stimulus in the given units of the stimulus/window. If the mask is a Gaussian then the size refers to the width at 3 standard deviations on either side of the mean (i.e. sd = size/6).
Texture Resolution [an integer (power of two)]: Defines the resolution of the texture for standard textures such as sin, sqr etc. For most cases a value of 256 pixels will suffice, but if stimuli are going to be very small then a lower resolution will use less memory.
Units [deg, cm, pix, norm, or inherit from window]: See Units for the window and stimuli.

See Also: API reference for PatchStim
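For orientation, here is a rough sketch of how these parameters map onto the underlying visual.PatchStim call in a script (all values are illustrative):

from psychopy import visual

win = visual.Window([800, 600], monitor='testMonitor', units='deg')
patch = visual.PatchStim(win,
                         tex='sin', mask='gauss',      #image and mask
                         ori=45, phase=0.25,           #orientation (deg) and phase (cycles)
                         pos=[0, 0], sf=2, size=[4, 4],
                         texRes=256, interpolate=True)
patch.draw()
win.flip()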
Complete control over the display options is available as an advanced setting, customize_everything.
7.4.8 Properties
name [string]: Everything in a PsychoPy experiment needs a unique name. The name should contain only letters, numbers and underscores (no punctuation marks or spaces).
visualAnalogScale [checkbox]: If this is checked, a line with no tick marks will be presented using the glow marker, and will return a rating from 0.00 to 1.00 (quasi-continuous). This is intended to bias people away from thinking in terms of numbers, and to focus more on the visual bar when making their rating. This supersedes either choices or scaleDescription.
choices [string]: Instead of a numeric scale, you can present the subject with words or phrases to choose from. Enter all the words as a string. (Probably more than 6 or so will not look so great on the screen.) Spaces are assumed to separate the words. If there are any commas, the string will be interpreted as a list of words or phrases (possibly including spaces) that are separated by commas.
scaleDescription: Brief instructions, reminding the subject how to interpret the numerical scale, default = '1 = not at all ... extremely = 7'.
low [str]: The lowest number (bottom end of the scale), default = 1. If it's not an integer, it will be converted to lowAnchorText (see Advanced).
high [str]: The highest number (top end of the scale), default = 7. If it's not an integer, it will be converted to highAnchorText (see Advanced).
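In Coder, the corresponding stimulus class is visual.RatingScale. A minimal sketch, assuming the low/high/scale arguments shown here (values are illustrative and the exact argument set may differ between versions):

from psychopy import visual

win = visual.Window([800, 600])
ratingScale = visual.RatingScale(win, low=1, high=7,
                                 scale='1 = not at all ... extremely = 7')
while ratingScale.noResponse:   #draw until the subject accepts a rating
    ratingScale.draw()
    win.flip()
print ratingScale.getRating()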
To indicate to PsychoPy that the value represents a variable or Python code, rather than literal text, it should be preceded by a $. For example, inserting intensity into the text field of the Text Component will cause that word literally to be presented, whereas $intensity will cause Python to search for the variable called intensity in the script. Variables associated with Loops can also be entered in this way (see Accessing loop parameters from components for further details). But it can also be used to evaluate arbitrary Python code. For example:
- $random(2) will generate a pair of random numbers
- $'yn'[randint(2)] will randomly choose the first or second character (y or n)
- $globalClock.getTime() will insert the current time in secs of the globalClock object
- $[sin(angle), cos(angle)] will insert the sin and cos of an angle (e.g. into the x,y coords of a stimulus)
7.5.1 Settings
Screen: If multiple screens are available (and if the graphics card is not an intel integrated graphics chip) then the user can choose which screen they use (e.g. 1 or 2).
Full-screen window: If this box is checked then the experiment window will fill the screen (overriding the window size setting and using the size that the screen is currently set to in the operating system settings).
Window size: The size of the window in pixels, if this is not to be a full-screen window.
Experiment Info: This is a Python dictionary object that stores information about the current experiment. This will be saved with any data files so can be used for storing information about the current run of the study. The information stored here can also be used within the experiment. For example, if the Experiment Info was {'participant':'jwp', 'ori':10} then Builder Components could access info['ori'] to retrieve the orientation set here. Obviously this is a useful way to run essentially the same experiment, but with different conditions set at run time.
Show info dlg: If this box is checked then a dialog will appear at the beginning of the experiment allowing the Experiment Info to be changed.
Monitor: The name of the monitor calibration. Must match one of the monitor names from Monitor Center.
Save log file: A log file provides a record of what occurred during the experiment in chronological order, including information about any errors or warnings that may have occurred.
Units: The default units of the window (see Units for the window and stimuli). These can be overridden by individual Components.
Logging level: How much detail do you want to be output to the log file, if it is being saved? The lowest level is error, which only outputs error messages; warning outputs warnings and errors; info outputs all info, warnings and errors; debug outputs all info that can be logged. This system enables the user to get a great deal of information while generating their experiments, and then to reduce this easily to just the critical information needed when actually running the study.
These files are designed to be used by experienced users with previous experience of Python and, probably, matplotlib. The contents of the file can be explored with dir(), as with any other Python object. All should be self-explanatory. Ideally, we should provide a demo script here for fetching and plotting some data (feel free to contribute).
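In that spirit, here is a rough sketch of the kind of script meant (the filename and the 'response' data key are illustrative, following the example earlier in this chapter):

import numpy, pylab
from psychopy.misc import fromFile

datFile = fromFile('subject1.psydat')
#mean response for each condition, averaged over repeats
means = [numpy.mean(datFile.data['response'][n]) for n in range(len(datFile.trialList))]
pylab.bar(range(len(means)), means)
pylab.xlabel('condition')
pylab.ylabel('mean response')
pylab.show()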
being used and were not fully developed. But they will be! (If you notice that this note is still here in several versions' time then let Jon know to update the info!)
CHAPTER EIGHT
CODER
You can learn to use the scripting interface to PsychoPy in several ways, and you should probably follow a combination of them. These do not teach you about Python concepts, and you are recommended also to learn about those. In particular, dictionaries, lists and numpy arrays are used a great deal in most PsychoPy experiments.
- Basic Concepts: some of the logic of PsychoPy scripting
- Tutorials: walk you through the development of some semi-complete experiments
- demos: in the demos menu of Coder view. Many and varied
- use the Builder to compile a script and see how it works
- check the Reference Manual (API) for further details
- ultimately, go into PsychoPy and start examining the source code. It's just regular Python!

Note: Before you start, tell PsychoPy about your monitor(s) using the Monitor Center. That way you get to use units (like degrees of visual angle) that will transfer easily to other computers.
Timing

There are various ways to measure and control timing in PsychoPy:
- using frame refresh periods (most accurate, least obvious)
- checking the time on Clock objects
- using core.wait() commands (most obvious, least flexible/accurate)

Using core.wait(), as in the above example, is clear and intuitive in your script. But it can't be used while something is changing. For more flexible timing, you could use a Clock() object from the core module:
from psychopy import visual, core

#setup stimulus
win = visual.Window([400,400])
gabor = visual.PatchStim(win, tex='sin', mask='gauss', sf=5, name='gabor')
gabor.setAutoDraw(True)  #automatically draw every frame
gabor.autoLog = False  #or we'll get many messages about phase change
clock = core.Clock()

#let's draw a stimulus for 2s, drifting for middle 0.5s
while clock.getTime() < 2.0:  #clock times are in seconds
    if 0.5 <= clock.getTime() < 1.0:
        gabor.setPhase(0.1, '+')  #increment by 1/10th of a cycle
    win.flip()  #update the screen on every frame
Clocks are accurate to around 1ms (better on some platforms), but using them to time stimuli is not very accurate because it fails to account for the fact that your monitor updates at a fixed frame rate. In the above, the stimulus does not actually get drawn for exactly 0.5s (500ms). If the screen is refreshing at 60Hz (16.7ms per frame) and the getTime() call reports that the time has reached 1.999s, then the stimulus will draw again for a frame, in accordance with the if statement, and will ultimately be displayed for 2.167s. Alternatively, if the time has reached 2.001s, there will not be an extra frame drawn. So using this method you get timing accurate to the nearest frame period but with little consistent precision. An error of 16.7ms might be acceptable for long-duration stimuli, but not for a brief presentation. It might also give the false impression that a stimulus can be presented for any given period. At 60Hz refresh you cannot present your stimulus for, say, 120ms; the frame period would limit you to a period of 116.7ms (7 frames) or 133.3ms (8 frames).

As a result, the most precise way to control stimulus timing is to present them for a specified number of frames. The frame rate is extremely precise, much better than ms-precision. Calls to Window.flip() will be synchronised to the frame refresh; the script will not continue until the flip has occurred. As a result, on most cards, as long as frames are not being dropped (see Detecting dropped frames) you can present stimuli for a fixed, reproducible period.

Note: Some graphics cards, such as Intel GMA graphics chips under win32, don't support frame sync. Avoid integrated graphics for experiment computers wherever possible.

Using the concept of fixed frame periods and flip() calls that sync to those periods, we can time stimulus presentation extremely precisely with the following:
from psychopy import visual, core

#setup stimulus
win = visual.Window([400,400])
gabor = visual.PatchStim(win, tex='sin', mask='gauss', sf=5,
    name='gabor', autoLog=False)
fixation = visual.PatchStim(win, tex=None, mask='gauss', sf=0, size=0.02,
    name='fixation', autoLog=False)
clock = core.Clock()
#let's draw a stimulus for 2s, drifting for middle 0.5s
for frameN in range(200):  #for exactly 200 frames
    if 10 <= frameN < 150:  #present fixation for a subset of frames
        fixation.draw()
    if 50 <= frameN < 100:  #present stim for a different subset
        gabor.setPhase(0.1, '+')  #increment by 1/10th of a cycle
        gabor.draw()
    win.flip()  #flip the screen on every frame
Using autoDraw

Stimuli are typically drawn manually on every frame in which they are needed, using the draw() function. You can also set any stimulus to start or stop drawing every frame using setAutoDraw(True) or setAutoDraw(False). If you use these commands on stimuli that also have autoLog==True, then these functions will also generate a log message on the frame when the first drawing occurs and on the first frame when it is confirmed to have ended.
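A minimal sketch of autoDraw in use (the window and stimulus parameters are illustrative, mirroring the example above):

from psychopy import visual

win = visual.Window([400, 400])
fixation = visual.PatchStim(win, tex=None, mask='gauss', sf=0, size=0.02, name='fixation')

fixation.setAutoDraw(True)    #from now on, fixation is drawn on every win.flip()
for frameN in range(60):      #e.g. keep it on screen for 60 frames
    win.flip()
fixation.setAutoDraw(False)   #stop drawing it automatically
win.flip()                    #one more flip, now without the fixation
win.close()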
Updating the logs

For performance purposes log files are not actually written when the log commands are sent. They are stored in a stack and processed automatically when the script ends. You might also choose to force a flush of the logged messages manually during the experiment (e.g. during an inter-trial interval):
from psychopy import log
...
log.flush()  #write messages out to all targets
This should only be necessary if you want to see the logged information as the experiment progresses.
AutoLogging

New in version 1.63.00. Certain events will log themselves automatically by default. For instance, visual stimuli send log messages every time one of their parameters is changed, and when autoDraw is toggled they send a message that the stimulus has started/stopped. All such log messages are timestamped with the frame flip on which they take effect. To avoid this logging, for stimuli such as fixation points that might not be critical to your analyses, or for stimuli that change constantly and will flood the logging system with messages, the autoLogging can be turned on/off at initialisation of the stimulus and can be altered afterwards with .setAutoLog(True/False).

Manual methods

In addition to a variety of automatic logging messages, you can create your own, of various levels. These can be timestamped immediately:
from psychopy import log
log.log(level=log.WARN, msg='something important')
log.log(level=log.EXP, msg='something about the conditions')
log.log(level=log.DATA, msg='something about a response')
log.log(level=log.INFO, msg='something less important')
There are additional convenience functions for the above: log.warn('a warning') etc. For stimulus changes you probably want the log message to be timestamped based on the frame flip (when the stimulus is next presented) rather than the time that the log message is sent:
from psychopy import log, visual
win = visual.Window([400,400])
win.flip()
log.log(level=log.EXP, msg='sent immediately')
log.logNextFlip(level=log.EXP, msg='sent on actual flip')
win.flip()
Using a custom clock for logs

New in version 1.63.00. By default, times for log files are reported as seconds after the very beginning of the script (often it takes a few seconds to initialise and import all modules too). You can set the logging system to use any given core.Clock object (actually, anything with a getTime() method):
from psychopy import core, log
globalClock = core.Clock()
log.setDefaultClock(globalClock)
8.2 Tutorials
8.2.1 Tutorial 1: Getting your first stimulus
A tutorial to get you going with your first stimulus display.

Know your monitor

PsychoPy has been designed to handle things like your screen calibrations for you. It is also designed to operate (if possible) in the final experimental units that you like to use, like degrees of visual angle. In order to do this PsychoPy needs to know a little about your monitor. There is a GUI to help with this (select Monitor Center from the Tools menu of the PsychoPy IDE, or run ...site-packages/monitors/MonitorCenter.py). In the Monitor Center window you can create a new monitor name, insert values that describe your monitor and run calibrations like gamma corrections. For now you can just stick to the testMonitor, but give it correct values for your screen size in number of pixels and width in cm. Now, when you create a window on your monitor you can give it the name testMonitor and stimuli will know how they should be scaled appropriately.

Your first stimulus

Building stimuli is extremely easy. All you need to do is create a Window, then some stimuli. Draw those stimuli, then update the window. PsychoPy has various other useful commands to help with timing too. Here's an example. Type it into a Coder window, save it somewhere and press run.
from psychopy import visual, core #import some libraries from PsychoPy

mywin = visual.Window([800,600], monitor="testMonitor", units="deg") #create a window

#create some stimuli
grating = visual.PatchStim(win=mywin, mask="circle", size=3, pos=[-4,0], sf=3)
fixation = visual.PatchStim(win=mywin, size=0.5, pos=[0,0], sf=0, rgb=-1)

#draw the stimuli and update the window
grating.draw()
fixation.draw()
mywin.update()

#pause, so you get a chance to see it!
core.wait(5.0)
Note: For those new to Python. Did you notice that the grating and the fixation stimuli call the same PatchStim but with different arguments? One of the nice features about Python is that you can give arguments by name and don't need to use them all. PatchStim has over 15 arguments that can be set, but the others just take on default values if they aren't needed.

That's a bit easy though. Let's make the stimulus move, at least! To do that we need to create a loop where we change the phase (or orientation, or position...) of the stimulus and then redraw. Add this code in place of the drawing code above:
for frameN in range(200):
    grating.setPhase(0.05, '+')  #advance phase by 0.05 of a cycle
    grating.draw()
    fixation.draw()
    mywin.update()
That ran for 200 frames (and then waited 5 seconds as well). Maybe it would be nicer to keep updating until the user hits a key instead. That's easy to add too. In the first line, add event to the list of modules you'll import. Then replace the line:
for frameN in range(200):

with the line:

while True: #this creates a never-ending loop
Then, within the loop (make sure it has the same indentation as the other lines) add the lines:
if len(event.getKeys())>0:
    break
event.clearEvents()
The first line counts how many keys have been pressed since the last frame. If more than zero are found then we break out of the never-ending loop. The second line clears the event buffer and should always be called after you've collected the events you want (otherwise it gets full of events that we don't care about, like the mouse moving around etc.). Your finished script should look something like this:
from psychopy import visual, core, event #import some libraries from PsychoPy

#create a window
mywin = visual.Window([800,600], monitor="testMonitor", units="deg")

#create some stimuli
grating = visual.PatchStim(win=mywin, mask='circle', size=3, pos=[-4,0], sf=3)
fixation = visual.PatchStim(win=mywin, size=0.2, pos=[0,0], sf=0, rgb=-1)

#draw the stimuli and update the window
while True: #this creates a never-ending loop
    grating.setPhase(0.05, '+')  #advance phase by 0.05 of a cycle
    grating.draw()
    fixation.draw()
    mywin.flip()

    if len(event.getKeys())>0:
        break
    event.clearEvents()

#cleanup
mywin.close()
core.quit()
There are several more simple scripts like this in the demos menu of the Coder and Builder views and many more to download. If you're feeling like something bigger then go to Tutorial 2: Measuring a JND using a staircase procedure, which will show you how to build an actual experiment.
8.2.2 Tutorial 2: Measuring a JND using a staircase procedure

You can download the full code here. Note that the entire experiment is constructed of less than 100 lines of code, including the initial presentation of a dialogue for parameters, generation and presentation of stimuli, running the trials, saving data and outputting a simple summary analysis for feedback. Not bad, eh?

Get info from the user

The first lines of code import the necessary libraries. We need lots of the psychopy components for a full experiment, as well as Python's time library (to get the current date) and numpy (which handles various numerical/mathematical functions):
from psychopy import core, visual, gui, data, misc, event
import time, numpy, random
The try:...except:... lines allow us to try to load a parameter file from a previous run of the experiment. If that fails (e.g. because the experiment has never been run) then we create a default set of parameters. These are easy to store in a Python dictionary that we'll call expInfo:
try:  #try to get a previous parameters file
    expInfo = misc.fromFile('lastParams.pickle')
except:  #if not there then use a default set
    expInfo = {'observer':'jwp', 'refOrientation':0}
expInfo['dateStr'] = data.getDateStr()  #add the current time
The last line adds the current date to whichever method was used. So, having loaded those parameters, let's allow the user to change them in a dialogue box (which we'll call dlg). This is the simplest form of dialogue, created directly from the dictionary above. The dialogue will be presented immediately to the user and the script will wait until they hit OK or Cancel. If they hit OK then dlg.OK=True, in which case we'll use the updated values and save them straight to a parameters file (the one we try to load above). If they hit Cancel then we'll simply quit the script and not save the values.
dlg = gui.DlgFromDict(expInfo, title='simple JND Exp', fixed=['dateStr'])
if dlg.OK:
    misc.toFile('lastParams.pickle', expInfo)  # save params to file for next time
else:
    core.quit()  # the user hit cancel so exit
Setup the information for trials

We'll create a file to which we can output some data as text during each trial (as well as outputting a binary file at the end of the experiment). We'll create a filename from the subject+date+'.csv' (note how easy it is to concatenate strings in python just by 'adding' them). .csv files can be opened in most spreadsheet packages. Having opened a text file for writing, the last line shows how easy it is to send text to this target document.
fileName = expInfo['observer'] + expInfo['dateStr']
dataFile = open(fileName + '.csv', 'w')  # a simple text file with comma-separated values
dataFile.write('targetSide,oriIncrement,correct\n')
PsychoPy allows us to set up an object to handle the presentation of stimuli in a staircase procedure, the StairHandler. This will define the increment of the orientation (i.e. how far it is from the reference orientation). The staircase can be configured in many ways, but we'll set it up to begin with an increment of 20 deg (very detectable) and home in on the 80% threshold value. We'll step up our increment every time the subject gets a wrong answer and step down if they get three right answers in a row. The step size will also decrease after every 2 reversals, starting with an 8 dB step (large) and going down to 1 dB steps (smallish). We'll finish after 50 trials.
staircase = data.StairHandler(startVal=20.0,
    stepType='db', stepSizes=[8, 4, 4, 2, 2, 1, 1],
    nUp=1, nDown=3,  # will home in on the 80% threshold
    nTrials=50)
Build your stimuli

Now we need to create a window, some stimuli and timers. We need a Window in which to draw our stimuli, a fixation point and two PatchStim stimuli (one for the target probe and one as the foil). We can have as many timers as we like and reset them at any time during the experiment, but I generally use one to measure the time since the experiment started and another that I reset at the beginning of each trial.
#create window and stimuli
win = visual.Window([800, 600], allowGUI=True, monitor='testMonitor', units='deg')
foil = visual.PatchStim(win, sf=1, size=4, mask='gauss', ori=expInfo['refOrientation'])
target = visual.PatchStim(win, sf=1, size=4, mask='gauss', ori=expInfo['refOrientation'])
fixation = visual.PatchStim(win, color=-1, colorSpace='rgb', tex=None, mask='circle', size=0.2)
#and some handy clocks to keep track of time
globalClock = core.Clock()
trialClock = core.Clock()
Once the stimuli are created we should give the subject a message asking if they're ready. The next two lines create a pair of messages, then draw them onto the screen and then update the screen to show what we've drawn. Finally we issue the command event.waitKeys() which will wait for a keypress before continuing.
message1 = visual.TextStim(win, pos=[0, +3], text='Hit a key when ready.')
message2 = visual.TextStim(win, pos=[0, -3],
    text="Then press left or right to identify the %.1f deg probe." % expInfo['refOrientation'])
message1.draw()
message2.draw()
fixation.draw()
win.flip()  # to show our newly drawn stimuli
#pause until there's a keypress
event.waitKeys()
Control the presentation of the stimuli

OK, so we have everything that we need to run the experiment. The following uses a for-loop that will iterate over trials in the experiment. With each pass through the loop the staircase object will provide the new value for the intensity (which we will call thisIncrement). We will randomly choose a side to present the target stimulus (using random.choice([-1, 1])), setting the position of the target to be there and the foil to be on the other side of the fixation point.
for thisIncrement in staircase:  # will step through the staircase
    #set location of stimuli
    targetSide = random.choice([-1, 1])  # will be either +1 (right) or -1 (left)
    foil.setPos([-5*targetSide, 0])
    target.setPos([5*targetSide, 0])  # in other location
Then set the orientation of the foil to be the reference orientation plus thisIncrement, draw all the stimuli (including the fixation point) and update the window.
    #set orientation of probe
    foil.setOri(expInfo['refOrientation'] + thisIncrement)
    #draw all stimuli
    foil.draw()
    target.draw()
    fixation.draw()
    win.flip()
Wait for a presentation time of 500 ms and then blank the screen (by updating the screen after drawing just the fixation point).
    core.wait(0.5)  # wait 500ms; but use a loop of x frames for more accurate timing in fullscreen
    # eg, to get 30 frames: for f in xrange(30): win.flip()
    #blank screen
    fixation.draw()
    win.flip()
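The comment above hints at frame-based timing as the more accurate alternative to core.wait(). A minimal sketch of that idea, assuming a 60 Hz monitor (so 30 frames is roughly 500 ms); nFrames is just an illustrative name and, inside the trial loop, these lines would take the same indentation as the code above:

nFrames = 30  # assumed 60Hz refresh, so ~500ms of presentation
for frameN in range(nFrames):
    foil.draw()
    target.draw()
    fixation.draw()
    win.flip()  # each flip waits for the next screen refresh
#then blank the screen by drawing just the fixation point
fixation.draw()
win.flip()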
Get input from the subject

Still within the for-loop (note the level of indentation is the same) we need to get the response from the subject. The method works by starting off assuming that there hasn't yet been a response and then waiting for a key press. For each key pressed we check if the answer was correct or incorrect and assign the response appropriately, which ends the trial. We always have to clear the event buffer if we're checking for key presses like this.
    thisResp = None
    while thisResp == None:
        allKeys = event.waitKeys()
        for thisKey in allKeys:
            if thisKey == 'left':
                if targetSide == -1:
                    thisResp = 1  # correct
                else:
                    thisResp = -1  # incorrect
            elif thisKey == 'right':
                if targetSide == 1:
                    thisResp = 1  # correct
                else:
                    thisResp = -1  # incorrect
            elif thisKey in ['q', 'escape']:
                core.quit()  # abort experiment
        event.clearEvents()  # must clear other (eg mouse) events - they clog the buffer
Now we must tell the staircase the result of this trial with its addData() method. Then it can work out whether the next trial is an increment or decrement. Also, on each trial (so still within the for-loop) we may as well save the data as a line of text in that .csv file we created earlier.
    staircase.addData(thisResp)
    dataFile.write('%i,%.3f,%i\n' % (targetSide, thisIncrement, thisResp))
Output your data and clean up

OK! We're basically done! We've reached the end of the for-loop (which occurred because the staircase terminated), which means the trials are over. The next step is to close the text data file and also save the staircase as a binary file (by 'pickling' the file in Python speak), which maintains a lot more info than we were saving in the text file.
#staircase has ended
dataFile.close()
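The line that pickles the staircase itself is not shown in this excerpt; a minimal sketch using the StairHandler's saveAsPickle() method and reusing the fileName built earlier (treat the exact filename as an assumption):

staircase.saveAsPickle(fileName)  # saves a copy of the whole staircase object for later analysis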
While we're here, it's quite nice to give some immediate feedback to the user. Let's tell them the intensity values at all the reversals and give them the mean of the last 6. This is an easy way to get an estimate of the threshold, but we might be able to do a better job by trying to reconstruct the psychometric function. To give that a try see the staircase analysis script of Tutorial 3.
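The feedback code itself falls outside this excerpt. A rough sketch of the idea described above, using the reversalIntensities attribute that StairHandler keeps and the numpy module imported at the top of the script (the message wording is illustrative):

#give some on-screen feedback: the mean of the final 6 reversals as a rough threshold
approxThreshold = numpy.average(staircase.reversalIntensities[-6:])
feedbackMsg = visual.TextStim(win, pos=[0, 0],
    text='Mean of final 6 reversals = %.3f' % approxThreshold)
feedbackMsg.draw()
win.flip()
event.waitKeys()  # wait for a keypress before closing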
win.close()
#This analysis script takes one or more staircase datafiles as input
#from a GUI. It then plots the staircases on top of each other on the left
#and a combined psychometric function from the same data on the right
from psychopy import data, gui, misc, core
import pylab

files = gui.fileOpenDlg('.')
if not files:
    core.quit()

#get the data from all the files
allIntensities, allResponses = [], []
for thisFileName in files:
    thisDat = misc.fromFile(thisFileName)
    allIntensities.append(thisDat.intensities)
    allResponses.append(thisDat.data)

#plot each staircase
pylab.subplot(121)
colors = 'brgkcmbrgkcm'
lines, names = [], []
for fileN, thisStair in enumerate(allIntensities):
    #lines.extend(pylab.plot(thisStair))
    #names = files[fileN]
    pylab.plot(thisStair, label=files[fileN])
#pylab.legend()

#get combined data
combinedInten, combinedResp, combinedN = \
    data.functionFromStaircase(allIntensities, allResponses, 5)
#fit curve
fit = data.FitFunction('weibullTAFC', combinedInten, combinedResp, guess=[0.2, 0.5])
smoothInt = pylab.arange(min(combinedInten), max(combinedInten), 0.001)
smoothResp = fit.eval(smoothInt)
thresh = fit.inverse(0.8)
print thresh

#plot curve
pylab.subplot(122)
pylab.plot(smoothInt, smoothResp, '-')
pylab.plot([thresh, thresh], [0, 0.8], '--')
pylab.plot([0, thresh], [0.8, 0.8], '--')
pylab.title('threshold = %0.3f' % (thresh))
#plot points
pylab.plot(combinedInten, combinedResp, 'o')
pylab.ylim([0, 1])
pylab.show()
CHAPTER
NINE
TROUBLESHOOTING
Regrettably, PsychoPy is not bug-free. Running on all possible hardware and all platforms is a big ask. That said, a huge number of bugs have been resolved by the fact that there are literally 1000s of people using the software who have contributed either bug reports and/or fixes. Below are some of the more common problems and their workarounds, as well as advice on how to get further help.
3. when you hit <return> you will hopefully get a moderately useful error message that you can send to the Forum (mailing list)

Mac users:

1. open the Console app (open Spotlight and type 'console')
2. if there are a huge number of messages there you might find it easiest to clear them (the brush icon) and then start PsychoPy again to generate a new set of messages
An error message may have been generated that was sent to the output of the Coder view:

1. go to the Coder view (from the Builder>View menu if not visible)
2. if there is no Output panel at the bottom of the window, go to the View menu and select Output
3. try running your experiment again and see if an error message appears in this Output view

If you still don't get an error message but the application still doesn't start then manually turn off the viewing of the Output (as below) and try the above again.
The files are simple text, which you should be able to edit in any text editor. Particular changes that you might need to make: if the problem is that you have a corrupt experiment file or script that is trying and failing to load on startup, you could simply delete the appData.cfg file. Please also send a copy of the file that isn't working to the Forum (mailing list), so that the underlying cause of the problem can be investigated (google first to see if it's a known issue).
CHAPTER
TEN
RECIPES (HOW-TOS)
Below are various tips/tricks/recipes/how-tos for PsychoPy. They involve something that is a little more involved than you would find in FAQs, but too specific for the manual as such (should they be there?).
To create your msg, insert the following into the start experiment section of the Code Component:
msg = 'doh!'  # if this comes up we forgot to update the msg!
and then insert the following into the Begin Routine section (this will get run every repeat of the routine):
if len(key_resp.keys) < 1:
    msg = "Failed to respond"
elif resp.corr:  # stored on last run routine
    msg = "Correct! RT=%.3f" % (resp.rt)
else:
    msg = "Oops! That was wrong"
If you used the Store Correct feature of the Keyboard Component (and told PsychoPy what the correct answer was) you will also have a variable:
#numpy array storing whether each response was correct (1) or not (0)
trials.data['resp.corr']
So, to create your msg, insert the following into the start experiment section of the Code Component:
msg = 'doh!'  # if this comes up we forgot to update the msg!
and then insert the following into the Begin Routine section (this will get run every repeat of the routine):
nCorr = trials.data['key_resp.corr'].sum()  # .std(), .mean() also available
meanRt = trials.data['key_resp.rt'].mean()
msg = "You got %i trials correct (rt=%.2f)" % (nCorr, meanRt)
from psychopy import visual, core, data, event
from numpy.random import shuffle
import copy, time  # from the std python libs

#create some info to store with the data
info = {}
info['startPoints'] = [1.5, 3, 6]
info['nTrials'] = 10
info['observer'] = 'jwp'

win = visual.Window([400, 400])
#---------------------
#create the stimuli
#---------------------

#create staircases
stairs = []
for thisStart in info['startPoints']:
    #we need a COPY of the info for each staircase
    #(or the changes here will be made to all the other staircases)
    thisInfo = copy.copy(info)
    #now add any specific info for this staircase
    thisInfo['thisStart'] = thisStart  # we might want to keep track of this
    thisStair = data.StairHandler(startVal=thisStart, extraInfo=thisInfo,
        nTrials=50, nUp=1, nDown=3,
        minVal=0.5, maxVal=8,
        stepSizes=[4, 4, 2, 2, 1, 1])
    stairs.append(thisStair)

for trialN in range(info['nTrials']):
    shuffle(stairs)  # this shuffles 'in place' (ie stairs itself is changed, nothing returned)
    #then loop through our randomised order of staircases for this repeat
    for thisStair in stairs:
        thisIntensity = thisStair.next()
        print 'start=%.2f, current=%.4f' % (thisStair.extraInfo['thisStart'], thisIntensity)

        #---------------------
        #run your trial and get an input
        #---------------------
        keys = event.waitKeys()  # (we can simulate by pushing left for 'correct')
        if 'left' in keys:
            wasCorrect = True
        else:
            wasCorrect = False

        thisStair.addData(wasCorrect)  # so that the staircase adjusts itself
    #this trial (of all staircases) has finished
#all trials finished
#save data (separate pickle and txt files for each staircase)
dateStr = time.strftime("%b_%d_%H%M", time.localtime())  # add the current time
for thisStair in stairs:
    #create a filename based on the subject and start value
    filename = "%s start%.2f %s" % (thisStair.extraInfo['observer'],
        thisStair.extraInfo['thisStart'], dateStr)
    thisStair.saveAsPickle(filename)
    thisStair.saveAsText(filename)
CHAPTER
THIRTEEN
REFERENCE MANUAL (API)
13.2.1 Window
class psychopy.visual.Window(size=(800, 600), pos=None, color=(0, 0, 0), colorSpace='rgb', rgb=None, dkl=None, lms=None, fullscr=None, allowGUI=None, monitor={}, bitsMode=None, winType=None, units=None, gamma=None, blendMode='avg', screen=0, viewScale=None, viewPos=None, viewOri=0.0, waitBlanking=True, allowStencil=False)

Used to set up a context in which to draw objects, using either PyGame (python's SDL binding) or pyglet. The pyglet backend allows multiple windows to be created, allows the user to specify which screen to use (if more than one is available, duh!) and allows movies to be rendered. Pygame has fewer bells and whistles, but does seem a little faster in text rendering. Pygame is used for all sound production and for monitoring the joystick.

Parameters
size [(800,600)] Size of the window in pixels (X,Y)
pos [None or (x,y)] Location of the window on the screen
rgb [[0,0,0]] Color of background as [r,g,b] list or single value. Each gun can take values between -1 and 1
fullscr [None, True or False] Better timing can be achieved in full-screen mode
allowGUI [None, True or False (if None prefs are used)] If set to False, window will be drawn with no frame and no buttons to close etc...
winType [None, 'pyglet', 'pygame'] If None then PsychoPy will revert to user/site preferences
monitor [None, string or a psychopy.monitors.Monitor object] The monitor to be used during the experiment
units [None, 'height' (of the window), 'norm' (normalised), 'deg', 'cm', 'pix'] Defines the default units of stimuli drawn in the window (can be overridden by each stimulus). See Units for the window and stimuli for explanation of options.
screen [0, 1 (or higher if you have many screens)] Specifies the physical screen that stimuli will appear on (pyglet winType only)
viewScale [None or [x,y]] Can be used to apply a custom scaling to the current units of the Window.
viewPos [None, or [x,y]] If not None, redefines the origin for the window
viewOri [0 or any numeric value] A single value determining the orientation of the view in degs
waitBlanking [None, True or False] After a call to flip() should we wait for the blank before the script continues
gamma Monitor gamma for linearisation (will use Bits++ if possible). Overrides monitor settings
bitsMode [None, 'fast' ('slow' mode is deprecated)] Defines how (and if) the Bits++ box will be used. 'fast' updates every frame by drawing a hidden line on the top of the screen.
allowStencil [True or False] When set to True, this allows operations that use the OpenGL stencil buffer (notably, allowing the psychopy.visual.Aperture to be used).

Note: Preferences. Some parameters (e.g. units) can now be given default values in the user/site preferences and these will be used if None is given here. If you do specify a value here it will take precedence over preferences.
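A minimal sketch of creating a window with a few of the parameters above ('testMonitor' is just the example monitor name used elsewhere in this manual):

from psychopy import visual

#a small, non-fullscreen window using degrees of visual angle as the default units
myWin = visual.Window(size=[800, 600], monitor='testMonitor', units='deg',
                      fullscr=False, allowGUI=True, color=[0, 0, 0])
myWin.flip()   # present the (blank, mid-grey) screen
myWin.close()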
clearBuffer() Clear the back buffer (to which you are currently drawing) without flipping the window. Useful if you want to generate movie sequences from the back buffer without actually taking the time to flip the window.
close() Close the window (and reset the Bits++ if necessary).
flip(clearBuffer=True) Flip the front and back buffers after drawing everything for your frame. (This replaces the win.update() method, better reflecting what is happening underneath).
    win.flip(clearBuffer=True)   # results in a clear screen after flipping
    win.flip(clearBuffer=False)  # the screen is not cleared (so re-presents the previous screen)
fps() Report the frames per second since the last call to this function (or since the window was created if this is the first call)
getMovieFrame(buffer='front') Capture the current Window as an image. This can be done at any time (usually after a .update() command). Frames are stored in memory until a .saveMovieFrames(filename) command is issued. You can issue getMovieFrame() as often as you like and then save them all in one go when finished.
logOnFlip(msg, level, obj=None) Send a log message that should be time-stamped at the next .flip() command. Parameters: msg: the message to be logged; level: the level of importance for the message; obj (optional): the python object that might be associated with this message if desired
saveFrameIntervals(fileName=None, clear=True) Save recorded screen frame intervals to disk, as comma-separated values. Parameters: fileName [None or the filename (including path if necessary) in which to store the data] If None then 'lastFrameIntervals.log' will be used.
saveMovieFrames(fileName, mpgCodec='mpeg1video', fps=30, clearFrames=True) Writes any captured frames to disk. Will write any format that is understood by PIL (tif, jpg, bmp, png...). Parameters: filename: name of file, including path (required). The extension at the end of the file determines the type of file(s) created. If an image type is given then multiple static frames are created. If it is .gif then an animated GIF image is created (although you will get a higher quality GIF by saving PNG files and then combining them in dedicated image manipulation software, e.g. GIMP). On windows and linux .mpeg files can be created if pymedia is installed. On OS X .mov files can be created if the pyobjc-frameworks-QTKit is installed. mpgCodec: the codec to be used by pymedia if the filename ends in .mpg. fps: the frame rate to be used throughout the movie; only for quicktime (.mov) movies. clearFrames: set this to False if you want the frames to be kept for additional calls to saveMovieFrames.
Examples:
myWin.saveMovieFrames('frame.tif')             # writes a series of static frames as frame001.tif, frame002.tif, etc.
myWin.saveMovieFrames('stimuli.mov', fps=25)   # on OS X only
myWin.saveMovieFrames('stimuli.gif')           # not great quality animated gif
myWin.saveMovieFrames('stimuli.mpg')           # not on OS X
setColor(color, colorSpace=None, operation='') Set the color of the window. NB This command sets the color that the blank screen will have on the next clear operation. As a result it effectively takes TWO flip() operations to become visible (the first uses the color to create the new screen, the second presents that screen to the viewer). See Color spaces for further information about the ways to specify colors and their various implications.
Parameters
color : Can be specified in one of many ways. If a string is given then it is interpreted as the name of the color. Any of the standard html/X11 color names <http://www.w3schools.com/html/html_colornames.asp> can be used. e.g.:

    myStim.setColor('white')
    myStim.setColor('RoyalBlue')  # (the case is actually ignored)

A hex value can be provided, also formatted as with web colors. This can be provided as a string that begins with # (not using python's usual 0x000000 format):

    myStim.setColor('#DDA0DD')  # DDA0DD is hexadecimal for plum

You can also provide a triplet of values, which refer to the coordinates in one of the Color spaces. If no color space is specified then the color space most recently used for this stimulus is used again.

    myStim.setColor([1.0, -1.0, -1.0], 'rgb')  # a red color in rgb space
    myStim.setColor([0.0, 45.0, 1.0], 'dkl')   # DKL space with elev=0, azimuth=45
    myStim.setColor([0, 0, 255], 'rgb255')     # a blue stimulus using rgb255 space

Lastly, a single number can be provided, x, which is equivalent to providing [x,x,x].

    myStim.setColor(255, 'rgb255')  # all guns at max

colorSpace : string or None defining which of the Color spaces to use. For strings and hex values this is not needed. If None the default colorSpace for the stimulus is used (defined during initialisation).
operation : one of '+', '-', '*', '/', or '' for no operation (simply replace value). For colors specified as a triplet of values (or single intensity value) the new value will perform this operation on the previous color.

    thisStim.setColor([1, 1, 1], 'rgb255', '+')  # increment all guns by 1 value
    thisStim.setColor(-1, 'rgb', '*')            # multiply the color by -1 (which in this space inverts the contrast)
    thisStim.setColor([10, 0, 0], 'dkl', '+')    # raise the elevation from the isoluminant plane by 10 deg

setGamma(gamma) Set the monitor gamma, using Bits++ if possible
setMouseVisible(visibility) Sets the visibility of the mouse cursor. If Window was initialised with noGUI=True then the mouse is initially set to invisible, otherwise it will initially be visible. Usage: setMouseVisible(False), setMouseVisible(True)
setRGB(newRGB) Deprecated: As of v1.61.00 please use setColor() instead
setRecordFrameIntervals(value=True) To provide accurate measures of frame intervals, to determine whether frames are being dropped. Set this to False while the screen is not being updated, e.g. during event.waitKeys(), and set to True during critical parts of the script. See also: Window.saveFrameIntervals()
setScale(units, font='dummyFont', prevScale=(1.0, 1.0)) This method is called from within the draw routine and sets the scale of the OpenGL context to map between units. Could potentially be called by the user in order to draw OpenGL objects manually in each frame. The units can be 'height' (multiples of window height), 'norm' (normalised), 'pix' (pixels), 'cm' or 'stroke_font'. The font parameter is only used if units='stroke_font'.
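A short sketch of using setRecordFrameIntervals() together with saveFrameIntervals() to check for dropped frames; the output filename is illustrative:

from psychopy import visual, core

myWin = visual.Window([800, 600], monitor='testMonitor', units='deg')
myWin.setRecordFrameIntervals(True)   # start recording flip-to-flip intervals
for frameN in range(60):              # roughly 1s on a 60Hz display
    myWin.flip()
myWin.setRecordFrameIntervals(False)
myWin.saveFrameIntervals(fileName='frameIntervals.log', clear=True)
myWin.close()
core.quit()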
13.2.2 PatchStim

A PatchStim can be rotated, scaled and shifted in position, its texture can be drifted in X and/or Y and it can have a spatial frequency in X and/or Y (for an image file that simply draws multiple copies in the patch). Also, since transparency can be controlled, two PatchStims can combine e.g. to form a plaid.

Using PatchStim with images from disk (jpg, tif, png...)

Ideally images to be rendered should be square with power-of-2 dimensions, e.g. 16x16, 128x128. Any image that is not will be upscaled (with linear interpolation) to the nearest such texture by PsychoPy. The size of the stimulus should be specified in the normal way using the appropriate units (deg, pix, cm...). Be sure to get the aspect ratio the same as the image (if you don't want it stretched!).

Why can't I have a normal image, drawn pixel-by-pixel? PatchStims are rendered using OpenGL textures. This is more powerful than using simple screen blitting - it allows the rotation, masking and transparency to work.
Parameters
win : a Window object (required)
tex : The texture forming the image - 'sin', 'sqr', 'saw', 'tri', None or the name of an image file (most formats supported) or a numpy array (1xN or NxN) ranging -1:1
mask : The alpha mask (forming the shape of the image) - None, 'circle', 'gauss' or the name of an image file (most formats supported) or a numpy array (1xN or NxN) ranging -1:1
units [None, 'norm', 'cm', 'deg' or 'pix'] If None then the current units of the Window will be used. See Units for the window and stimuli for explanation of other options.
pos : a tuple (0.0,0.0) or a list [0.0,0.0] for the x and y of the centre of the stimulus. The origin is the screen centre, the units are determined by units (see above). Stimuli can be positioned beyond the window!
size : a tuple (0.5,0.5) or a list [0.5,0.5] for the x and y OR a single value (which will be applied to x and y). Units are specified by units (see above). Sizes can be negative and can extend beyond the window. Note: If the mask is Gaussian ('gauss'), then the size parameter refers to the stimulus at 3 standard deviations on each side of the centre (ie. sd=size/6)
sf : a tuple (1.0,1.0) or a list [1.0,1.0] for the x and y OR a single value (which will be applied to x and y). Where units == 'deg' or 'cm' units are in cycles per deg/cm. If units == 'norm' then sf units are in cycles per stimulus (so scale with stimulus size). If the texture is an image loaded from a file then sf defaults to 1/stim size to give one cycle of the image.
ori : orientation of stimulus in degrees
phase : a tuple (0.0,0.0) or a list [0.0,0.0] for the x and y OR a single value (which will be applied to x and y). Phase of the stimulus in each direction. NB phase has modulus 1 (rather than 360 or 2*pi). This is a little unconventional but has the nice effect that setting phase=t*n drifts a stimulus at n Hz.
texRes : resolution of the texture (if not loading from an image file)
color : Could be a: web name for a color (e.g. 'FireBrick'); hex value (e.g. '#FF0047'); tuple (1.0,1.0,1.0); list [1.0,1.0,1.0]; or numpy array. If the last three are used then the color space should also be given. See Color spaces
colorSpace : the color space controlling the interpretation of the color. See Color spaces
contrast : How far the stimulus deviates from the middle grey. Contrast can vary -1:1 (this is a multiplier for the values given in the color description of the stimulus).
opacity : 1.0 is opaque, 0.0 is transparent
depth : The depth argument is deprecated and may be removed in future versions. Depth is controlled simply by drawing order.
name [string] The name of the object to be used during logged messages about this stim

clearTextures() Clear the textures associated with the given stimulus. As of v1.61.00 this is called automatically during garbage collection of your stimulus, so doesn't need calling explicitly by the user.
draw(win=None) Draw the stimulus in its relevant window. You must call this method after every MyWin.flip() if you want the stimulus to appear on that frame and then update the screen again.
setAutoDraw(val) Add or remove a stimulus from the list of stimuli that will be automatically drawn on each flip. Parameters: val: True/False. True to add the stimulus to the draw list, False to remove it.
setAutoLog(val=True) Turn on (or off) autoLogging for this stimulus. Parameters: val: True (default) or False
setColor(color, colorSpace=None, operation='') Set the color of the stimulus. See Color spaces for further information about the various ways to specify colors and their various implications.
Parameters
color : Can be specified in one of many ways. If a string is given then it is interpreted as the name of the color. Any of the standard html/X11 color names <http://www.w3schools.com/html/html_colornames.asp> can be used. e.g.:

    myStim.setColor('white')
    myStim.setColor('RoyalBlue')  # (the case is actually ignored)

A hex value can be provided, also formatted as with web colors. This can be provided as a string that begins with # (not using python's usual 0x000000 format):

    myStim.setColor('#DDA0DD')  # DDA0DD is hexadecimal for plum

You can also provide a triplet of values, which refer to the coordinates in one of the Color spaces. If no color space is specified then the color space most recently used for this stimulus is used again.

    myStim.setColor([1.0, -1.0, -1.0], 'rgb')  # a red color in rgb space
    myStim.setColor([0.0, 45.0, 1.0], 'dkl')   # DKL space with elev=0, azimuth=45
    myStim.setColor([0, 0, 255], 'rgb255')     # a blue stimulus using rgb255 space

Lastly, a single number can be provided, x, which is equivalent to providing [x,x,x].

    myStim.setColor(255, 'rgb255')  # all guns at max

colorSpace : string or None defining which of the Color spaces to use. For strings and hex values this is not needed. If None the default colorSpace for the stimulus is used (defined during initialisation).
operation : one of '+', '-', '*', '/', or '' for no operation (simply replace value). For colors specified as a triplet of values (or single intensity value) the new value will perform this operation on the previous color.

    thisStim.setColor([1, 1, 1], 'rgb255', '+')  # increment all guns by 1 value
    thisStim.setColor(-1, 'rgb', '*')            # multiply the color by -1 (which in this space inverts the contrast)
    thisStim.setColor([10, 0, 0], 'dkl', '+')    # raise the elevation from the isoluminant plane by 10 deg

setContr(newContr, operation='') Set the contrast of the stimulus
setContrast(value, operation='')
setDKL(newDKL, operation='') DEPRECATED since v1.60.05: Please use setColor
setDepth(newDepth, operation='')
setLMS(newLMS, operation='') DEPRECATED since v1.60.05: Please use setColor
setMask(value)
setOpacity(newOpacity, operation='')
setOri(newOri, operation='') Set the stimulus orientation in degrees
setPhase(value, operation='')
setPos(newPos, operation='', units=None) Set the stimulus position in the specified (or inherited) units
setRGB(newRGB, operation='') DEPRECATED since v1.60.05: Please use setColor
setSF(value, operation='')
setSize(newSize, operation='', units=None) Set the stimulus size [X,Y] in the specified (or inherited) units
setTex(value)
setUseShaders(val=True) Set this stimulus to use shaders if possible.
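The PatchStim text above mentions that two semi-transparent PatchStims can be combined into a plaid; a minimal sketch of that idea (orientations, spatial frequency and opacity values are arbitrary):

from psychopy import visual, core

myWin = visual.Window([600, 600], monitor='testMonitor', units='deg')
#two overlapping gratings at +/-45 deg; the second is semi-transparent so both are visible
grating1 = visual.PatchStim(myWin, tex='sin', mask='gauss', sf=2, size=4, ori=45)
grating2 = visual.PatchStim(myWin, tex='sin', mask='gauss', sf=2, size=4, ori=-45,
                            opacity=0.5)
for frameN in range(120):        # drift both components for ~2s at 60Hz
    grating1.setPhase(0.02, '+')
    grating2.setPhase(0.02, '+')
    grating1.draw()
    grating2.draw()
    myWin.flip()
myWin.close()
core.quit()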
13.2.3 SimpleImageStim

Parameters
win : a Window object (required)
image : The filename, including relative or absolute path. The image can be any format that the Python Imaging Library can import (which is almost all).
units [None, 'height', 'norm', 'cm', 'deg' or 'pix'] If None then the current units of the Window will be used. See Units for the window and stimuli for explanation of other options.
pos : a tuple (0.0,0.0) or a list [0.0,0.0] for the x and y of the centre of the stimulus. The origin is the screen centre, the units are determined by units (see above). Stimuli can be positioned beyond the window!
contrast : How far the stimulus deviates from the middle grey. Contrast can vary -1:1 (this is a multiplier for the values given in the color description of the stimulus)
opacity : 1.0 is opaque, 0.0 is transparent
name [string] The name of the object to be used during logged messages about this stim

draw(win=None) Draw the stimulus in its relevant window. You must call this method after every MyWin.flip() if you want the stimulus to appear on that frame and then update the screen again.
setDepth(newDepth, operation='')
setFlipHoriz(newVal=True) If set to True then the image will be flipped horizontally (left-to-right). Note that this is relative to the original image, not relative to the current state.
setFlipVert(newVal=True) If set to True then the image will be flipped vertically (top-to-bottom). Note that this is relative to the original image, not relative to the current state.
setImage(filename=None) Set the image to be drawn. Parameters: filename: The filename, including relative or absolute path if necessary. Can actually also be an image loaded by PIL.
setPos(newPos, operation='', units=None)
setUseShaders(val=True) Set this stimulus to use shaders if possible.
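The class name itself was lost in this excerpt; judging by the parameter set it is the simple image stimulus (SimpleImageStim in this release), but treat that name as an assumption. A minimal sketch, assuming an image file 'face.jpg' sits next to the script:

from psychopy import visual, core

myWin = visual.Window([800, 600], monitor='testMonitor', units='pix')
#'face.jpg' is a placeholder filename - substitute any image on disk
img = visual.SimpleImageStim(myWin, image='face.jpg', pos=[0, 0])
img.draw()
myWin.flip()
core.wait(2.0)
myWin.close()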
13.2.4 TextStim
class psychopy.visual.TextStim(win, text='Hello World', font='', pos=(0.0, 0.0), depth=0, rgb=None, color=(1.0, 1.0, 1.0), colorSpace='rgb', opacity=1.0, units='', ori=0.0, height=None, antialias=True, bold=False, italic=False, alignHoriz='center', alignVert='center', fontFiles=[], wrapWidth=None, name='', autoLog=True)

Class of text stimuli to be displayed in a Window.

Parameters
win : A Window object. Required - the stimulus must know where to draw itself
text : The text to be rendered
pos : Position on the screen
color : Could be a: web name for a color (e.g. 'FireBrick'); hex value (e.g. '#FF0047'); tuple (1.0,1.0,1.0); list [1.0,1.0,1.0]; or numpy array. If the last three are used then the color space should also be given. See Color spaces
colorSpace : the color space controlling the interpretation of the color. See Color spaces
opacity : How transparent the object will be (0 for transparent, 1 for opaque)
units [None, 'height', 'norm', 'cm', 'deg' or 'pix'] If None then the current units of the Window will be used. See Units for the window and stimuli for explanation of other options.
ori : Orientation of the text
height : Height of the characters (including the ascent of the letter and the descent)
antialias : boolean to allow (or not) antialiasing the text
bold : Make the text bold (better to use a bold font name)
italic : Make the text italic (better to use an actual italic font)
alignHoriz : The horizontal alignment ('left', 'right' or 'center')
alignVert : The vertical alignment ('top', 'bottom' or 'center')
fontFiles : A list of additional files if the font is not in the standard system location (include the full path)
wrapWidth : The width the text should run before wrapping
name [string] The name of the object to be used during logged messages about this stim
depth : The depth argument is deprecated and may be removed in future versions. Depth is controlled simply by drawing order.

draw(win=None) Draw the stimulus in its relevant window. You must call this method after every MyWin.flip() if you want the stimulus to appear on that frame and then update the screen again. If win is specified then override the normal window of this stimulus.
setAutoDraw(val) Add or remove a stimulus from the list of stimuli that will be automatically drawn on each flip. Parameters: val: True/False. True to add the stimulus to the draw list, False to remove it.
setAutoLog(val=True) Turn on (or off) autoLogging for this stimulus. Parameters: val: True (default) or False
setColor(color, colorSpace=None, operation='') Set the color of the stimulus. See Color spaces for further information about the various ways to specify colors and their various implications.
Parameters
color : Can be specified in one of many ways. If a string is given then it is interpreted as the name of the color. Any of the standard html/X11 color names <http://www.w3schools.com/html/html_colornames.asp> can be used. e.g.:

    myStim.setColor('white')
    myStim.setColor('RoyalBlue')  # (the case is actually ignored)

A hex value can be provided, also formatted as with web colors. This can be provided as a string that begins with # (not using python's usual 0x000000 format):

    myStim.setColor('#DDA0DD')  # DDA0DD is hexadecimal for plum

You can also provide a triplet of values, which refer to the coordinates in one of the Color spaces. If no color space is specified then the color space most recently used for this stimulus is used again.

    myStim.setColor([1.0, -1.0, -1.0], 'rgb')  # a red color in rgb space
    myStim.setColor([0.0, 45.0, 1.0], 'dkl')   # DKL space with elev=0, azimuth=45
    myStim.setColor([0, 0, 255], 'rgb255')     # a blue stimulus using rgb255 space

Lastly, a single number can be provided, x, which is equivalent to providing [x,x,x].

    myStim.setColor(255, 'rgb255')  # all guns at max

colorSpace : string or None defining which of the Color spaces to use. For strings and hex values this is not needed. If None the default colorSpace for the stimulus is used (defined during initialisation).
operation : one of '+', '-', '*', '/', or '' for no operation (simply replace value). For colors specified as a triplet of values (or single intensity value) the new value will perform this operation on the previous color.

    thisStim.setColor([1, 1, 1], 'rgb255', '+')  # increment all guns by 1 value
    thisStim.setColor(-1, 'rgb', '*')            # multiply the color by -1 (which in this space inverts the contrast)
    thisStim.setColor([10, 0, 0], 'dkl', '+')    # raise the elevation from the isoluminant plane by 10 deg

setContr(newContr, operation='') Set the contrast of the stimulus
setDKL(newDKL, operation='') DEPRECATED since v1.60.05: Please use setColor
setDepth(newDepth, operation='')
setFont(font) Set the font to be used for text rendering. font should be a string specifying the name of the font (in system resources)
setHeight(height) Set the height of the letters (including the entire box that surrounds the letters in the font). The width of the letters is then defined by the font.
setLMS(newLMS, operation='') DEPRECATED since v1.60.05: Please use setColor
setOpacity(newOpacity, operation='')
setOri(newOri, operation='') Set the stimulus orientation in degrees
setPos(newPos, operation='', units=None) Set the stimulus position in the specified (or inherited) units
setRGB(value, operation='')
setSize(newSize, operation='', units=None) Set the stimulus size [X,Y] in the specified (or inherited) units
setText(value=None) Set the text to be rendered using the current font
setUseShaders(val=True) Set this stimulus to use shaders if possible.
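A minimal sketch of a TextStim using some of the parameters above (the text, position and height values are arbitrary):

from psychopy import visual, core

myWin = visual.Window([800, 600], monitor='testMonitor', units='norm')
msg = visual.TextStim(myWin, text='Hello World', pos=[0, 0.5], height=0.1,
                      color='white', alignHoriz='center', wrapWidth=1.5)
msg.draw()
myWin.flip()
core.wait(2.0)
myWin.close()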
13.2.5 MovieStim
class psychopy.visual.MovieStim(win, filename='', units='pix', size=None, pos=(0.0, 0.0), ori=0.0, flipVert=False, flipHoriz=False, opacity=1.0, name='', autoLog=True)

A stimulus class for playing movies (mpeg, avi, etc...) in PsychoPy.

Examples:

mov = visual.MovieStim(myWin, 'testMovie.mp4', flipVert=False)
print mov.duration
print mov.format.width, mov.format.height  # give the original size of the movie in pixels
mov.draw()  # draw the current frame (automagically determined)

See MovieStim.py for a demo.

Parameters
win : a Window object (required)
filename : a string giving the relative or absolute path to the movie. Can be any movie that AVbin can read (e.g. mpeg, DivX)
units [None, 'height', 'norm', 'cm', 'deg' or 'pix'] If None then the current units of the Window will be used. See Units for the window and stimuli for explanation of other options.
pos : position of the centre of the movie, given in the units specified
flipVert [True or False] If True then the movie will be top-bottom flipped
flipHoriz [True or False] If True then the movie will be right-left flipped
ori : Orientation of the stimulus in degrees
size : Size of the stimulus in the units given. If not specified then the movie will take its original dimensions.
opacity : the movie can be made transparent by reducing this
name [string] The name of the object to be used during logged messages about this stim

draw(win=None) Draw the current frame to a particular visual.Window (or to the default win for this object if not specified). The current position in the movie will be determined automatically. This method should be called on every frame that the movie is meant to appear.
loadMovie(filename) Load a movie from file. Parameters: filename: string. The name of the file, including path if necessary. Brings up a warning if avbin is not found on the computer. After the file is loaded MovieStim.duration is updated with the movie duration (in seconds).
pause() Pause the current point in the movie (sound will stop, current frame will not advance)
play() Continue a paused movie from the current position
seek(timestamp) Seek to a particular timestamp in the movie. NB this does not seem very robust as at version 1.62 and may cause crashes!
setAutoDraw(val) Add or remove a stimulus from the list of stimuli that will be automatically drawn on each flip. Parameters: val: True/False. True to add the stimulus to the draw list, False to remove it.
setAutoLog(val=True) Turn on (or off) autoLogging for this stimulus. Parameters: val: True (default) or False
setColor(color, colorSpace=None, operation='') Set the color of the stimulus. See Color spaces for further information about the various ways to specify colors and their various implications.
Parameters
color : Can be specified in one of many ways. If a string is given then it is interpreted as the name of the color. Any of the standard html/X11 color names <http://www.w3schools.com/html/html_colornames.asp> can be used. e.g.:

    myStim.setColor('white')
    myStim.setColor('RoyalBlue')  # (the case is actually ignored)

A hex value can be provided, also formatted as with web colors. This can be provided as a string that begins with # (not using python's usual 0x000000 format):

    myStim.setColor('#DDA0DD')  # DDA0DD is hexadecimal for plum

You can also provide a triplet of values, which refer to the coordinates in one of the Color spaces. If no color space is specified then the color space most recently used for this stimulus is used again.

    myStim.setColor([1.0, -1.0, -1.0], 'rgb')  # a red color in rgb space
    myStim.setColor([0.0, 45.0, 1.0], 'dkl')   # DKL space with elev=0, azimuth=45
    myStim.setColor([0, 0, 255], 'rgb255')     # a blue stimulus using rgb255 space

Lastly, a single number can be provided, x, which is equivalent to providing [x,x,x].

    myStim.setColor(255, 'rgb255')  # all guns at max

colorSpace : string or None defining which of the Color spaces to use. For strings and hex values this is not needed. If None the default colorSpace for the stimulus is used (defined during initialisation).
operation : one of '+', '-', '*', '/', or '' for no operation (simply replace value). For colors specified as a triplet of values (or single intensity value) the new value will perform this operation on the previous color.

    thisStim.setColor([1, 1, 1], 'rgb255', '+')  # increment all guns by 1 value
    thisStim.setColor(-1, 'rgb', '*')            # multiply the color by -1 (which in this space inverts the contrast)
    thisStim.setColor([10, 0, 0], 'dkl', '+')    # raise the elevation from the isoluminant plane by 10 deg

setContr(newContr, operation='') Set the contrast of the stimulus
setDKL(newDKL, operation='') DEPRECATED since v1.60.05: Please use setColor
setDepth(newDepth, operation='')
setLMS(newLMS, operation='') DEPRECATED since v1.60.05: Please use setColor
setMovie(filename) See MovieStim.loadMovie (the functions are identical). This form is provided for syntactic consistency with other visual stimuli.
setOpacity(newOpacity, operation='') Sets the opacity of the movie to newOpacity. Over-rides _BaseVisualStim.setOpacity
setOri(newOri, operation='') Set the stimulus orientation in degrees
setPos(newPos, operation='', units=None) Set the stimulus position in the specified (or inherited) units
setRGB(newRGB, operation='') DEPRECATED since v1.60.05: Please use setColor
setSize(newSize, operation='', units=None) Set the stimulus size [X,Y] in the specified (or inherited) units
setUseShaders(val=True) Set this stimulus to use shaders if possible.
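As the draw() entry above says, the current movie frame is only shown if draw() is called on every screen refresh; a minimal playback loop (the movie filename is a placeholder):

from psychopy import visual, core

myWin = visual.Window([800, 600], monitor='testMonitor', units='pix')
#'testMovie.mp4' is a placeholder - any AVbin-readable movie will do
mov = visual.MovieStim(myWin, 'testMovie.mp4', size=[320, 240], flipVert=False)
movieClock = core.Clock()
while movieClock.getTime() < mov.duration:
    mov.draw()       # draws whichever frame is current at this moment
    myWin.flip()
myWin.close()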
13.2.6 ShapeStim
class psychopy.visual.ShapeStim(win, units='', lineWidth=1.0, lineColor=(1.0, 1.0, 1.0), lineColorSpace='rgb', fillColor=None, fillColorSpace='rgb', vertices=((-0.5, 0), (0, 0.5), (0.5, 0)), closeShape=True, pos=(0, 0), ori=0.0, opacity=1.0, depth=0, interpolate=True, lineRGB=None, fillRGB=None, name='', autoLog=True)

Create geometric (vector) shapes by defining vertex locations. Shapes can be outlines or filled, by setting lineRGB and fillRGB to rgb triplets, or None. They can also be rotated (stim.setOri(__)) and translated (stim.setPos(__)) like any other stimulus. NB for now the fill of objects is performed using glBegin(GL_POLYGON) and that is limited to convex shapes. With concavities you get unpredictable results (e.g. add a fill color to the arrow stim below). To create concavities, you can combine multiple shapes, or stick to just outlines. (If anyone wants to rewrite ShapeStim to use glu tesselators that would be great!)

Parameters
win : A Window object (required)
units [None, 'norm', 'cm', 'deg' or 'pix'] If None then the current units of the Window will be used. See Units for the window and stimuli for explanation of other options.
lineColor : Could be a: web name for a color (e.g. 'FireBrick'); hex value (e.g. '#FF0047'); tuple (1.0,1.0,1.0); list [1.0,1.0,1.0]; or numpy array. If the last three are used then the color space should also be given. See Color spaces
lineColorSpace : The color space controlling the interpretation of the lineColor. See Color spaces
fillColor : Could be a: web name for a color (e.g. 'FireBrick'); hex value (e.g. '#FF0047'); tuple (1.0,1.0,1.0); list [1.0,1.0,1.0]; or numpy array. If the last three are used then the color space should also be given. See Color spaces
lineWidth [int (or float?)] specifying the line width in pixels
vertices [a list of lists or a numpy array (Nx2)] specifying the xy positions of each vertex
closeShape [True or False] Do you want the last vertex to be automatically connected to the first?
pos [tuple, list or 2x1 array] the position of the anchor for the stimulus (relative to which the vertices are drawn)
ori [float or int] the shape can be rotated around the anchor
opacity [float] 1.0 is opaque, 0.0 is transparent
depth : The depth argument is deprecated and may be removed in future versions. Depth is controlled simply by drawing order.
interpolate [True or False] If True the edge of the line will be antialiased.
name [string] The name of the object to be used during logged messages about this stim

draw(win=None) Draw the stimulus in its relevant window. You must call this method after every MyWin.flip() if you want the stimulus to appear on that frame and then update the screen again.
setAutoDraw(val) Add or remove a stimulus from the list of stimuli that will be automatically drawn on each flip. Parameters: val: True/False. True to add the stimulus to the draw list, False to remove it.
setAutoLog(val=True) Turn on (or off) autoLogging for this stimulus. Parameters: val: True (default) or False
setColor(color, colorSpace=None, operation='') For ShapeStim use setLineColor() or setFillColor()
setContr(newContr, operation='') Set the contrast of the stimulus
setDKL(newDKL, operation='') DEPRECATED since v1.60.05: Please use setColor
setDepth(newDepth, operation='')
setFillColor(color, colorSpace=None, operation='') Sets the color of the shape fill. See PatchStim.setColor() for further details of how to use this function. Note that shapes where some vertices point inwards will usually not fill correctly.
setFillRGB(value, operation='') DEPRECATED since v1.60.05: Please use setFillColor()
setLMS(newLMS, operation='') DEPRECATED since v1.60.05: Please use setColor
setLineColor(color, colorSpace=None, operation='') Sets the color of the shape edge. See PatchStim.setColor() for further details of how to use this function.
setLineRGB(value, operation='') DEPRECATED since v1.60.05: Please use setLineColor()
setOpacity(newOpacity, operation='')
setOri(newOri, operation='') Set the stimulus orientation in degrees
setPos(newPos, operation='', units=None) Set the stimulus position in the specified (or inherited) units
setRGB(newRGB, operation='') DEPRECATED since v1.60.05: Please use setColor
setSize(newSize, operation='', units=None) Set the stimulus size [X,Y] in the specified (or inherited) units
setUseShaders(val=True) Set this stimulus to use shaders if possible.
setVertices(value=None, operation='') Set the xy values of the vertices (relative to the centre of the field). Values should be an array/list of Nx2 coordinates.
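A minimal sketch of ShapeStim: a filled square with an unfilled triangle outline drawn on top (vertex and color values are arbitrary):

from psychopy import visual, core

myWin = visual.Window([600, 600], monitor='testMonitor', units='norm')
square = visual.ShapeStim(myWin, lineColor='red', fillColor='blue',
    vertices=[(-0.5, -0.5), (-0.5, 0.5), (0.5, 0.5), (0.5, -0.5)], closeShape=True)
triangle = visual.ShapeStim(myWin, lineColor='white', fillColor=None,
    vertices=[(-0.3, -0.3), (0, 0.3), (0.3, -0.3)], closeShape=True)
square.draw()
triangle.draw()
myWin.flip()
core.wait(2.0)
myWin.close()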
13.2.7 RadialStim
class psychopy.visual.RadialStim(win, tex='sqrXsqr', mask='none', units='', pos=(0.0, 0.0), size=(1.0, 1.0), radialCycles=3, angularCycles=4, radialPhase=0, angularPhase=0, ori=0.0, texRes=64, angularRes=100, visibleWedge=(0, 360), rgb=None, color=(1.0, 1.0, 1.0), colorSpace='rgb', dkl=None, lms=None, contrast=1.0, opacity=1.0, depth=0, rgbPedestal=(0.0, 0.0, 0.0), interpolate=False, name='', autoLog=True)

Stimulus object for drawing radial stimuli, like an annulus, a rotating wedge, a checkerboard etc... Ideal for fMRI retinotopy stimuli! Many of the capabilities are built on top of the PatchStim. This stimulus is still relatively new and I'm finding occasional glitches. It also takes longer to draw than a typical PatchStim, so it is not recommended for tasks where high frame rates are needed.

Parameters
win : a Window object (required)
tex : The texture forming the image - 'sqrXsqr', 'sinXsin', 'sin', 'sqr', None or the name of an image file (most formats supported) or a numpy array (1xN, NxNx1, NxNx3) ranging -1:1
mask : Unlike the mask in the PatchStim, this is a 1-D mask dictating the behaviour from the centre of the stimulus to the surround.
units [None, 'norm', 'cm', 'deg' or 'pix'] If None then the current units of the Window will be used. See Units for the window and stimuli for explanation of other options.
pos : a tuple (0.0,0.0) or a list [0.0,0.0] for the x and y of the centre of the stimulus. Stimuli can be positioned beyond the window!
size : a tuple (0.5,0.5) or a list [0.5,0.5] for the x and y OR a single value (which will be applied to x and y). Sizes can be negative and stimuli can extend beyond the window.
ori : orientation of the stimulus in degrees
texRes [(default=128)] resolution of the texture (if not loading from an image file)
angularRes [(default=100)] the number of triangles used to make the stimulus
radialPhase : the phase of the texture from the centre to the perimeter of the stimulus
angularPhase : the phase of the texture around the stimulus
color : Could be a: web name for a color (e.g. 'FireBrick'); hex value (e.g. '#FF0047'); tuple (1.0,1.0,1.0); list [1.0,1.0,1.0]; or numpy array. If the last three are used then the color space should also be given. See Color spaces
colorSpace : the color space controlling the interpretation of the color. See Color spaces
contrast [(default=1.0)] How far the stimulus deviates from the middle grey. Contrast can vary -1:1 (this is a multiplier for the values given in the color description of the stimulus)
opacity : 1.0 is opaque, 0.0 is transparent
depth : The depth argument is deprecated and may be removed in future versions. Depth is controlled simply by drawing order.
name [string] The name of the object to be used during logged messages about this stim

clearTextures() Clear the textures associated with the given stimulus. As of v1.61.00 this is called automatically during garbage collection of your stimulus, so doesn't need calling explicitly by the user.
draw(win=None) Draw the stimulus in its relevant window. You must call this method after every MyWin.flip() if you want the stimulus to appear on that frame and then update the screen again. If win is specified then override the normal window of this stimulus.
setAngularCycles(value, operation='') Set the number of cycles going around the stimulus
setAngularPhase(value, operation='') Set the angular phase of the texture
setAutoDraw(val) Add or remove a stimulus from the list of stimuli that will be automatically drawn on each flip. Parameters: val: True/False. True to add the stimulus to the draw list, False to remove it.
setAutoLog(val=True) Turn on (or off) autoLogging for this stimulus. Parameters: val: True (default) or False
setColor(color, colorSpace=None, operation='') Set the color of the stimulus. See Color spaces for further information about the various ways to specify colors and their various implications.
Parameters
color : Can be specified in one of many ways. If a string is given then it is interpreted as the name of the color. Any of the standard html/X11 color names <http://www.w3schools.com/html/html_colornames.asp> can be used. e.g.:

    myStim.setColor('white')
    myStim.setColor('RoyalBlue')  # (the case is actually ignored)

A hex value can be provided, also formatted as with web colors. This can be provided as a string that begins with # (not using python's usual 0x000000 format):

    myStim.setColor('#DDA0DD')  # DDA0DD is hexadecimal for plum

You can also provide a triplet of values, which refer to the coordinates in one of the Color spaces. If no color space is specified then the color space most recently used for this stimulus is used again.

    myStim.setColor([1.0, -1.0, -1.0], 'rgb')  # a red color in rgb space
    myStim.setColor([0.0, 45.0, 1.0], 'dkl')   # DKL space with elev=0, azimuth=45
    myStim.setColor([0, 0, 255], 'rgb255')     # a blue stimulus using rgb255 space

Lastly, a single number can be provided, x, which is equivalent to providing [x,x,x].

    myStim.setColor(255, 'rgb255')  # all guns at max

colorSpace : string or None defining which of the Color spaces to use. For strings and hex values this is not needed. If None the default colorSpace for the stimulus is used (defined during initialisation).
operation : one of '+', '-', '*', '/', or '' for no operation (simply replace value). For colors specified as a triplet of values (or single intensity value) the new value will perform this operation on the previous color.

    thisStim.setColor([1, 1, 1], 'rgb255', '+')  # increment all guns by 1 value
    thisStim.setColor(-1, 'rgb', '*')            # multiply the color by -1 (which in this space inverts the contrast)
    thisStim.setColor([10, 0, 0], 'dkl', '+')    # raise the elevation from the isoluminant plane by 10 deg

setContr(newContr, operation='') Set the contrast of the stimulus
setContrast(value, operation='')
setDKL(newDKL, operation='') DEPRECATED since v1.60.05: Please use setColor
setDepth(newDepth, operation='')
setLMS(newLMS, operation='') DEPRECATED since v1.60.05: Please use setColor
setMask(value)
setOpacity(newOpacity, operation='')
setOri(newOri, operation='') Set the stimulus orientation in degrees
setPhase(value, operation='')
setPos(newPos, operation='', units=None) Set the stimulus position in the specified (or inherited) units
setRGB(newRGB, operation='') DEPRECATED since v1.60.05: Please use setColor
setRadialCycles(value, operation='') Set the number of texture cycles from centre to periphery
setRadialPhase(value, operation='') Set the radial phase of the texture
setSF(value, operation='')
setSize(value, operation='')
setTex(value)
setUseShaders(val=True) Set this stimulus to use shaders if possible.
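A minimal sketch of a rotating checkerboard wedge of the kind used for fMRI retinotopy (the size, wedge and cycle values are arbitrary):

from psychopy import visual, core

myWin = visual.Window([600, 600], monitor='testMonitor', units='deg')
wedge = visual.RadialStim(myWin, tex='sqrXsqr', size=8,
                          visibleWedge=[0, 45], radialCycles=4, angularCycles=8)
for frameN in range(180):          # rotate for ~3s at 60Hz
    wedge.setOri(frameN * 2.0)     # advance orientation by 2 deg per frame
    wedge.draw()
    myWin.flip()
myWin.close()
core.quit()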
13.2.8 ElementArrayStim
class psychopy.visual.ElementArrayStim(win, units=None, eldPos=(0.0, 0.0), eldSize=(1.0, 1.0), eldShape=circle, nElements=100, sizes=2.0, xys=None, rgbs=(1.0, 1.0, 1.0), opacities=1.0, depths=0, eldDepth=0, oris=0, sfs=1.0, contrs=1, phases=0, elementTex=sin, elementMask=gauss, texRes=48, name=, autoLog=True) This stimulus class denes a eld of elements whose behaviour can be independently controlled. Suitable for creating global form stimuli or more detailed random dot stimuli. This stimulus can draw thousands of elements without dropping a frame, but in order to achieve this performance, uses several OpenGL extensions only available on modern graphics cards (supporting OpenGL2.0). See the ElementArray demo. Parameters win : a Window object (required) units [None, height, norm, cm, deg or pix ] If None then the current units of the Window will be used. See Units for the window and stimuli for explanation of other options. eldPos [] The centre of the array of elements eldSize [] The size of the array of elements (this will be overridden by setting explicit xy positions for the elements) eldShape [] The shape of the array (circle or sqr) nElements [] number of elements in the array sizes [] an array of sizes Nx1, Nx2 or a single value xys [] the xy positions of the elements, relative to the eld centre (eldPos) rgbs [] specifying the color(s) of the elements. Should be Nx1 (different greys), Nx3 (different colors) or 1x3 (for a single color) opacities [] the opacity of each element (Nx1 or a single value) depths [] the depths of the elements (Nx1), relative the overall depth of the eld (eldDepth) eldDepth [] the depth of the eld (will be added to the depths of the elements) oris [] the orientations of the elements (Nx1 or a single value) sfs [] the spatial frequencies of the elements (Nx1, Nx2 or a single value) contrs [] the contrasts of the elements, ranging -1 to +1 (Nx1 or a single value) 84 Chapter 13. Reference Manual (API)
phases [] the spatial phase of the texture on the stimulus (Nx1 or a single value) elementTex [] the texture, to be used by all elements (e.g. sin, sqr,.. , myTexture.tif, numpy.ones([48,48])) elementMask [] the mask, to be used by all elements (e.g. circle, gauss,... , myTexture.tif, numpy.ones([48,48])) texRes [] the number of pixels in the textures (overridden if an array or image is provided) name [string] The name of the objec to be using during logged messages about this stim clearTextures() Clear the textures associated with the given stimulus. As of v1.61.00 this is called automatically during garbage collection of your stimulus, so doesnt need calling explicitly by the user. draw(win=None) Draw the stimulus in its relevant window. You must call this method after every MyWin.update() if you want the stimulus to appear on that frame and then update the screen again. setContrs(value, operation=) Set the contrast for each element. Should either be: a single value an Nx1 array/list setFieldPos(value, operation=) Set the centre of the array (X,Y) setFieldSize(value, operation=) Set the size of the array on the screen (will override current XY positions of the elements) setMask(value) Change the mask (all elements have the same mask). Avoid doing this during time-critical points in your script. Uploading new textures to the graphics card can be time-consuming. setOpacities(value, operation=) Set the opacity for each element. Should either be a single value or an Nx1 array/list setOris(value, operation=) Set the orientation for each element. Should either be a single value or an Nx1 array/list setPhases(value, operation=) Set the phase for each element. Should either be: a single value an Nx1 array/list an Nx2 array/list (for separate X and Y phase) setPos(newPos=None, operation=, units=None) Obselete - users should use setFieldPos or instead of setPos setRgbs(value, operation=) Set the rgb for each element. Should either be: a single value an Nx1 array/list an Nx3 array/list setSfs(value, operation=) Set the spatial frequency for each element. Should either be:
a single value, an Nx1 array/list, or an Nx2 array/list (spatial frequency of the element in X and Y). If the units for the stimulus are 'pix' or 'norm' then the units of sf are cycles per stimulus width. For units of 'deg' or 'cm' the units are cycles per degree or cycles per cm respectively.
setSizes(value, operation='')
Set the size for each element. Should be either a single value, an Nx1 array/list, or an Nx2 array/list
setTex(value)
Change the texture (all elements have the same base texture). Avoid this during time-critical points in your script: uploading new textures to the graphics card can be time-consuming.
setXYs(value=None, operation='')
Set the xy values of the element centres (relative to the centre of the field). Values should be either None or an array/list of Nx2 coordinates. If value is None then the xy positions will be generated automatically, based on the fieldSize and fieldPos. In this case the opacities will also be overridden by this function (they are used to make elements outside the field invisible).
updataElementColors()
Create a new array of self._RGBAs
updataTextureCoords()
Create a new array of self._maskCoords
updateElementVertices()
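A minimal usage sketch follows; the window size, element count, field extent and drift rate are arbitrary illustrative choices, not values taken from the manual:

from psychopy import visual, core
import numpy

win = visual.Window([800, 600], units='pix')

# scatter 200 Gabor-like elements over a 400 x 400 pix region
nElements = 200
xys = (numpy.random.rand(nElements, 2) - 0.5) * 400
elems = visual.ElementArrayStim(win, nElements=nElements, xys=xys,
                                sizes=30, sfs=3,  # roughly 3 cycles per element width in 'pix' units
                                elementTex='sin', elementMask='gauss')

for frameN in range(120):
    elems.setPhases(0.05, '+')  # drift the carrier of every element each frame
    elems.draw()
    win.flip()

core.quit()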
13.2.9 DotStim
class psychopy.visual.DotStim(win, units='', nDots=1, coherence=0.5, fieldPos=(0.0, 0.0), fieldSize=(1.0, 1.0), fieldShape='sqr', dotSize=2.0, dotLife=3, dir=0.0, speed=0.5, rgb=None, color=(1.0, 1.0, 1.0), colorSpace='rgb', opacity=1.0, depth=0, element=None, signalDots='different', noiseDots='position', name='', autoLog=True)
This stimulus class defines a field of dots with an update rule that determines how they change on every call to the .draw() method.
This standard class can be used to generate a wide variety of dot-motion types. For a review of the possible types and their pros and cons see Scase, Braddick & Raymond (1996). All six of the motions they describe can be generated with appropriate choices of signalDots (which determines whether signal dots are the same or different from frame to frame), noiseDots (which determines the locations of the noise dots on each frame) and dotLife (which determines for how many frames a dot will continue before being regenerated).
Movshon-type noise uses a random position, rather than a random direction, for the noise dots, and the signal dots are distinct (noiseDots='different'). This has the disadvantage that the noise dots not only have a random direction but also a random speed (so they differ in two ways from the signal dots). The default option for DotStim is
that the dots follow a random walk, with the dot and noise elements being randomised each frame. This provides greater certainty that individual dots cannot be used to determine the motion direction.
When dots go out of bounds or reach the end of their life they are given a new random position. As a result, to prevent inhomogeneities from arising in the distribution of dots across the field, a limited-lifetime dot is strongly recommended.
If further customisation is required, the DotStim should be subclassed and its _update_dotsXY and _newDotsXY methods overridden.
Parameters
win : a Window object (required)
units : None, 'norm', 'cm', 'deg' or 'pix'. If None then the current units of the Window will be used. See Units for the window and stimuli for an explanation of the other options.
nDots : int. number of dots to be generated
fieldPos : (x,y) or [x,y] specifying the location of the centre of the stimulus.
fieldSize : a single value, specifying the diameter of the field. Sizes can be negative and can extend beyond the window.
fieldShape : 'sqr' or 'circle'. Defines the envelope used to present the dots
dotSize : specified in pixels (overridden if element is specified)
dotLife : int. Number of frames each dot lives for (default=3, -1=infinite)
dir : float (degrees). direction of the coherent dots
speed : float. speed of the dots (in units/frame)
signalDots : 'same' or 'different'. If 'same' then the chosen signal dots remain the same on each frame. If 'different' they are randomly chosen each frame. This parameter corresponds to Scase et al's (1996) categories of RDK.
noiseDots : 'position', 'direction' or 'walk'. Determines the behaviour of the noise dots, taken directly from Scase et al's (1996) categories. For 'position', noise dots take a random position every frame. For 'direction' noise dots follow a random, but constant, direction. For 'walk' noise dots vary their direction every frame, but keep a constant speed.
color : could be a web name for a color (e.g. 'FireBrick'); a hex value (e.g. '#FF0047'); a tuple (1.0,1.0,1.0); a list [1.0,1.0,1.0]; or a numpy array. If one of the last three is used then the color space should also be given. See Color spaces.
colorSpace : the color space controlling the interpretation of the color. See Color spaces.
opacity : float. 1.0 is opaque, 0.0 is transparent
depth : the depth argument is deprecated and may be removed in future versions. Depth is controlled simply by drawing order.
element : None or a visual stimulus object. This can be any object that has a .draw() method and a .setPos([x,y]) method (e.g. a PatchStim, TextStim...). See ElementArrayStim for a faster implementation of this idea.
name : string. The name of the object, used in logged messages about this stimulus
draw(win=None)
Draw the stimulus in its relevant window. You must call this method after every MyWin.flip() if you want the stimulus to appear on that frame and then update the screen again.
set(attrib, val, op='')
DotStim.set() is obsolete and may not be supported in future versions of PsychoPy. Use the specific method for each parameter instead (e.g. setFieldPos(), setCoherence()...)
setAutoDraw(val)
Add or remove a stimulus from the list of stimuli that will be automatically drawn on each flip
Parameters
val: True/False. True to add the stimulus to the draw list, False to remove it
setAutoLog(val=True)
Turn on (or off) autoLogging for this stimulus.
Parameters
val: True (default) or False
setColor(color, colorSpace=None, operation='')
Set the color of the stimulus. See Color spaces for further information about the various ways to specify colors and their various implications.
Parameters
color : can be specified in one of many ways. If a string is given then it is interpreted as the name of the color. Any of the standard html/X11 color names <http://www.w3schools.com/html/html_colornames.asp> can be used, e.g.:
myStim.setColor('white')
myStim.setColor('RoyalBlue')  # (the case is actually ignored)
A hex value can also be provided, formatted as with web colors. This should be a string that begins with # (not using Python's usual 0x000000 format):
myStim.setColor('#DDA0DD')  # DDA0DD is hexadecimal for plum
You can also provide a triplet of values, which refer to the coordinates in one of the Color spaces. If no color space is specified then the color space most recently used for this stimulus is used again.

myStim.setColor([1.0, -1.0, -1.0], 'rgb')  # a red color in rgb space
myStim.setColor([0.0, 45.0, 1.0], 'dkl')   # DKL space with elev=0, azimuth=45
myStim.setColor([0, 0, 255], 'rgb255')     # a blue stimulus using rgb255 space

Lastly, a single number can be provided, x, which is equivalent to providing [x,x,x].

myStim.setColor(255, 'rgb255')  # all guns to max

colorSpace : string or None, defining which of the Color spaces to use. For strings and hex values this is not needed. If None the default colorSpace for the stimulus is used (defined during initialisation).
operation : one of '+', '-', '*', '/', or '' for no operation (simply replace value). For colors specified as a triplet of values (or a single intensity value) the new value will perform this operation on the previous color:
thisStim.setColor([1, 1, 1], 'rgb255', '+')  # increment all guns by 1 value
thisStim.setColor(-1, 'rgb', '*')            # multiply the color by -1 (which in this space inverts the contrast)
thisStim.setColor([10, 0, 0], 'dkl', '+')    # raise the elevation from the isoluminant plane by 10 deg

setContr(newContr, operation='')
Set the contrast of the stimulus
setDKL(newDKL, operation='')
DEPRECATED since v1.60.05: please use setColor
setDepth(newDepth, operation='')
setDir(val, op='')
Change the direction of the signal dots (units in degrees)
setFieldCoherence(val, op='')
Change the coherence (%) of the DotStim. This will be rounded according to the number of dots in the stimulus.
setFieldPos(val, op='')
setLMS(newLMS, operation='')
DEPRECATED since v1.60.05: please use setColor
setOpacity(newOpacity, operation='')
setOri(newOri, operation='')
Set the stimulus orientation in degrees
setPos(newPos=None, operation='', units=None)
Obsolete: users should use setFieldPos instead of setPos
setRGB(newRGB, operation='')
DEPRECATED since v1.60.05: please use setColor
setSize(newSize, operation='', units=None)
Set the stimulus size [X,Y] in the specified (or inherited) units
setSpeed(val, op='')
Change the speed of the dots (in stimulus units per second)
setUseShaders(val=True)
Set this stimulus to use shaders if possible.
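A minimal sketch of a coherent-motion display; the monitor name and parameter values below are illustrative assumptions rather than recommendations from the manual:

from psychopy import visual, core

win = visual.Window([800, 600], units='deg', monitor='testMonitor')

# 100 dots, 60% coherent rightward motion, limited dot lifetime
dots = visual.DotStim(win, nDots=100, coherence=0.6, fieldSize=8.0,
                      fieldShape='circle', dotLife=5, dir=0.0, speed=0.05,
                      signalDots='different', noiseDots='position')

for frameN in range(180):  # roughly 3 s at 60 Hz
    dots.draw()            # the update rule is applied on each call to draw()
    win.flip()

core.quit()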
13.2.10 RatingScale
class psychopy.visual.RatingScale(win, scale=None, choices=None, low=1, high=7, lowAnchorText=None, highAnchorText=None, precision=1, textSizeFactor=1.0, textColor='LightGray', textFont='Helvetica Bold', showValue=True, showScale=True, showAnchors=True, showAccept=True, acceptKeys='return', acceptPreText='key, click', acceptText='accept?', leftKeys='left', rightKeys='right', lineColor='White', markerStyle='triangle', markerColor=None, markerStart=False, markerExpansion=1, customMarker=None, escapeKeys=None, allowSkip=True, skipKeys='tab', mouseOnly=False, singleClick=False, displaySizeFactor=1.0, stretchHoriz=1.0, pos=None, minTime=1.0, name='', autoLog=True)
A class for getting numeric subjective ratings, e.g., on a 1-to-7 scale. Returns a re-usable rating-scale object having a .draw() method, with a customizable visual appearance. It tries to provide useful default values.
The .draw() method displays the rating scale only (not the item to be rated), handles the subject's response, and updates the display. When the subject responds, .noResponse goes False (i.e., there is a response). You can then call .getRating() to obtain the rating, .getRT() to get the decision time, or .reset() to restore the scale (for re-use). The experimenter has to handle the item to be rated, i.e., draw() it in the same window on each frame: a RatingScale instance has no idea what else is on the screen. The subject can use the arrow keys (left, right) to move the marker in small increments (e.g., 1/100th of a tick-mark if precision = 100).
Auto-rescaling happens if the low anchor is 0 and the high anchor is a multiple of 10, just to reduce visual clutter.
Example 1.:
*Default 7-point scale*::

myItem = <create your text, image, movie, ...>
myRatingScale = visual.RatingScale(myWin)
while myRatingScale.noResponse:
    myItem.draw()
    myRatingScale.draw()
    myWin.flip()
rating = myRatingScale.getRating()
decisionTime = myRatingScale.getRT()
Example 2.:
*Mouse-free*. Considerable customization is possible. For fMRI, where an in-scanner response box sends key presses, use custom left, right, and accept keys, and no mouse; a sketch of such a configuration is given below.
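The code for this example did not survive the conversion to text; a plausible sketch using only the parameters documented below (the key names are hypothetical and should be matched to your response box):

myRatingScale = visual.RatingScale(myWin, low=1, high=5, precision=1,
                                   markerStart=3,                 # pre-select the midpoint
                                   leftKeys='1', rightKeys='2',   # hypothetical button-box keys
                                   acceptKeys='4',
                                   showAccept=True, mouseOnly=False)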
Example 3.:
*Non-numeric* choices (categorical, unordered)::

myRatingScale = visual.RatingScale(myWin, choices=['agree', 'disagree'])
myRatingScale = visual.RatingScale(myWin, choices=['cherry', 'apple', True, 3.14, 'pie'])
# str(item) will be displayed, but the value returned by getResponse() will be of the type you gave
myRatingScale = visual.RatingScale(myWin, choices=[True, False])
So if you give boolean values and the subject chooses False, getResponse() will return False (the boolean, not the string 'False').
See the Coder demos / stimuli / ratingScale.py for examples.
Author: 2010 Jeremy Gray; 2011 various updates; tab is the new default key to mean 'skip this item' (escape means 'break out of the experiment' in Builder)
Parameters
win : A Window object (required)
scale : string, explanation of the numbers to display to the subject; default = None will result in a default scale: <low>=not at all, <high>=extremely
choices : a list of items which the subject can choose among (takes precedence over low, high, lowAnchorText, highAnchorText, showScale)
low : lowest numeric rating / low anchor (integer, default = 1)
high : highest numeric rating / high anchor (integer, default = 7; at least low+1)
lowAnchorText : text to display for the low end of the scale (default = numeric low value)
highAnchorText : text to display for the high end of the scale (default = numeric high value)
precision : portions of a tick to accept as input [1, 10, 100], default = 1 tick (no fractional parts). Note: left/right keys will move the marker by one portion of a tick.
textSizeFactor : control the size of text elements of the scale. For larger than default text (expand) set > 1; for smaller, set < 1
textColor : color to use for anchor and scale text (assumed to be RGB), default = 'LightGray'
textFont : name of the font to use, default = 'Helvetica Bold'
showValue : show the subject their currently selected number, default = True
showScale : show the scale text, default = True
showAnchors : show the two end points of the scale (low, high), default = True
showAccept : show the button to click to accept the current value by using the mouse, default = True. Note: if showAccept is False and acceptKeys is empty, acceptKeys is reset to ['return'] to give the subject a way to respond.
acceptKeys : list of keys that mean 'accept the current response', default = ['return']
acceptPreText : text to display before any value has been selected
acceptText : text to display in the 'accept' button after a value has been selected
leftKeys : list of keys that mean 'move leftwards', default = ['left']
rightKeys : list of keys that mean 'move rightwards', default = ['right']
lineColor : color to use for the scale line, default = 'White'
markerStyle : 'triangle' (DarkBlue), 'circle' (DarkRed), or 'glow' (White)
markerColor : None = use defaults; or any legal RGB colorname, e.g., '#123456', 'DarkRed'
markerStart : False, or the value in [low..high] to be pre-selected upon initial display
markerExpansion : how much the 'glow' marker expands when moving to the right; 0 = none, negative shrinks; try 10 or -10
escapeKeys : keys that will quit the experiment, calling core.quit(). default = [ ] (none) allowSkip : if True, the subject can skip an item by pressing a key in skipKeys, default = True skipKeys : list of keys the subject can use to skip a response, default = [tab] Note: to require a response to every item, use allowSkip=False mouseOnly : require the subject use the mouse only (no keyboard), default = False. can be used to avoid competing with other objects for keyboard input. Note: mouseOnly=True and showAccept=False is a bad combination, so showAccept wins (mouseOnly is reset to False); similarly, mouseOnly and allowSkip can conict, because skipping an item is done via key press (mouseOnly wins) mouseOnly=True is helpful if there will be something else on the screen expecting keyboard input singleClick : enable a mouse click to both indicate and accept the rating, default = False. note that the accept box is visible, but clicking it has no effect, its just to display the value. a legal key press will also count as a singleClick. pos [tuple (x, y)] where to position the rating scale (x, y) in terms of the windows units (pix, norm); default (0.0, -0.4) in norm units displaySizeFactor : how much to expand or contract the overall rating scale display (not just the line length) stretchHoriz: multiplicative factor for stretching (or compressing) the scale horizontally (3 -> use the whole window); like displaySizeFactor, but only in the horizontal direction minTime : number of seconds that must elapse before a reponse can be accepted, default = 1.0s Note: For a max response time (upper limit), record the response only within a desired time window. Or present the ratingScale for as long as you like, and just ignore late responses. name [string] The name of the object to be using during logged messages about this stim draw() Update the visual display, check for response (key, mouse, skip). sets self.noResponse as appropriate. draw() only draws the rating scale, not the item to be rated getRT() Returns the seconds taken to make the rating (or to indicate skip). Returns None if no rating available. getRating() Returns the numerical rating. None if the subject skipped this item; False if not available. reset() restores the rating-scale to its post-creation state (as untouched by the subject). does not restore the scale text description (such reset is needed between items when rating multiple items)
13.2.11 Aperture
class psychopy.visual.Aperture(win, size, pos=(0, 0), ori=0, nVert=120, shape='circle', units=None)
Used to create a shape (circular for now) that restricts the area in which a stimulus is visible.
Note: This is a new stimulus (1.63.05) and is subject to change. Notably, right now it supports only a circular shape.
When enabled, any drawing commands will only operate on pixels within the Aperture. Once disabled, subsequent draw operations affect the whole screen as usual. See demos/stimuli/aperture.py for example usage.
Author: 2011, Yuri Spitsyn; 2011, Jon Peirce added units options, Jeremy Gray added shape & orientation
disable()
Disable the Aperture. Any subsequent drawing operations will not be affected by the aperture until it is re-enabled.
enable()
Enable the aperture so that it is used in future drawing operations. NB the Aperture is enabled by default when created.
setPos(pos, needReset=True)
Set the pos (centre) of the Aperture
setSize(size, needReset=True)
Set the size (diameter) of the Aperture
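A brief sketch of enable/disable usage; the window settings and stimulus below are illustrative assumptions:

from psychopy import visual

win = visual.Window([800, 600], units='norm')
gabor = visual.PatchStim(win, tex='sin', mask='gauss', size=1.5)

aperture = visual.Aperture(win, size=0.8)  # enabled as soon as it is created
gabor.draw()                               # only pixels inside the circle are drawn
win.flip()

aperture.disable()                         # now drawing affects the whole screen again
gabor.draw()
win.flip()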
13.3.1 TrialHandler
class psychopy.data.TrialHandler(trialList, nReps, method='random', dataTypes=None, extraInfo=None, seed=None, originPath=None)
Class to handle smoothly the selection of the next trial and report current values etc. Calls to .next() will fetch the next object given to this handler, according to the method specified, and will raise a StopIteration error when the trials have finished. See demo_trialHandler.py
trialList: a simple list (or flat array) of trials.
addData(thisType, value, position=None)
Add data for the current trial
next()
Advances to the next trial and returns it. Updates the attributes thisTrial, thisTrialN and thisIndex. If the trials have ended this method will raise a StopIteration error. This can be handled with code such as:
trials = TrialHandler(.......)
for eachTrial in trials:  # automatically stops when done
    # do stuff
or:
trials = TrialHandler(.......)
while True:  # ie forever
    try:
        thisTrial = trials.next()
    except StopIteration:  # we got a StopIteration error
        break  # break out of the forever loop
    # do stuff here for the trial
nextTrial()
DEPRECATION WARNING: TrialHandler.nextTrial() will be deprecated; please use TrialHandler.next() instead. jwp: 19/6/06
printAsText(stimOut=[], dataOut=('all_mean', 'all_std', 'all_raw'), delim='\t', matrixOnly=False)
Exactly like saveAsText except that the output goes to the screen instead of a file
saveAsExcel(fileName, sheetName='rawData', stimOut=[], dataOut=('n', 'all_mean', 'all_std', 'all_raw'), matrixOnly=False, appendFile=True)
Save a summary data file in Excel OpenXML format workbook (xlsx) for processing in most spreadsheet packages. This format is compatible with versions of Excel (2007 or greater) and with OpenOffice (>=3.0).
It has the advantage over the simpler text files (see TrialHandler.saveAsText()) that data can be stored in multiple named sheets within the file. So you could have a single file named after your experiment and then have one worksheet for each participant. Or you could have one file for each participant and then multiple sheets for repeated sessions etc.
The file extension .xlsx will be added if not given already.
Parameters
fileName: string. the name of the file to create or append. Can include a relative or absolute path
sheetName: string. the name of the worksheet within the file
stimOut: list of strings. the attributes of the trial characteristics to be output. To use this you need to have provided a list of dictionaries as the trialList parameter of the TrialHandler and give here the names of strings specifying entries in that dictionary
dataOut: list of strings. specifies the dataType and the analysis to be performed, in the form dataType_analysis. The data can be any of the types that you added using trialHandler.data.add() and the analysis can be either 'raw' or most things in the numpy library, including 'mean', 'std', 'median', 'max', 'min'. e.g. rt_max will give a column of max reaction times across the trials, assuming that rt values have been stored. The default values will output the raw, mean and std of all datatypes found
appendFile: True or False. If False any existing file with this name will be overwritten. If True then a new worksheet will be appended. If a worksheet already exists with that name a number will be added to make it unique.
saveAsPickle(fileName)
Basically just saves a copy of self (with data) to a pickle file. This can be reloaded if necessary and further analyses carried out.
saveAsText(fileName, stimOut=[], dataOut=('n', 'all_mean', 'all_std', 'all_raw'), delim='\t', matrixOnly=False, appendFile=True)
Write a text file with the data and various chosen stimulus attributes
Parameters
fileName: will have .dlm appended (so you can double-click it to open in Excel) and can include path info.
stimOut: the stimulus attributes to be output. To use this you need to use a list of dictionaries and give here the names of the dictionary keys that you want as strings
dataOut: a list of strings specifying the dataType and the analysis to be performed, in the form dataType_analysis. The data can be any of the types that you added using trialHandler.data.add() and the analysis can be either 'raw' or most things in the
numpy library, including 'mean', 'std', 'median', 'max', 'min'... The default values will output the raw, mean and std of all datatypes found
delim: allows the user to use a delimiter other than tab (',' is popular, with file extension .csv)
matrixOnly: outputs the data with no header row or extraInfo attached
appendFile: will add this output to the end of the specified file if it already exists
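A minimal sketch tying these methods together; the condition parameters and data values are placeholders invented for illustration:

from psychopy import data

# two trial types, each repeated 5 times in random order
conditions = [{'ori': 45, 'sf': 2.0}, {'ori': 90, 'sf': 2.0}]
trials = data.TrialHandler(trialList=conditions, nReps=5, method='random')

for thisTrial in trials:          # stops automatically after the last trial
    # ... present a stimulus using thisTrial['ori'], collect a response ...
    trials.addData('rt', 0.42)    # placeholder values for this sketch
    trials.addData('correct', 1)

trials.saveAsText('myData', dataOut=['rt_mean', 'correct_raw'])
trials.saveAsExcel('myData', sheetName='subj01')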
13.3.2 StairHandler
class psychopy.data.StairHandler(startVal, nReversals=None, stepSizes=4, nTrials=0, nUp=1, nDown=3, extraInfo=None, method='2AFC', stepType='db', minVal=None, maxVal=None, originPath=None)
Class to handle smoothly the selection of the next trial and report current values etc. Calls to nextTrial() will fetch the next object given to this handler, according to the method specified. See demo_trialHandler.py
The staircase will terminate when nTrials AND nReversals have been exceeded. If stepSizes was an array and has been exceeded before nTrials is exceeded then the staircase will continue to reverse.
Parameters
startVal: the initial value for the staircase.
nReversals: the minimum number of reversals permitted. If stepSizes is a list then there must also be enough reversals to satisfy this list.
stepSizes: the size of steps as a single value or a list (or array). For a single value the step size is fixed. For an array or list the step size will progress to the next entry at each reversal.
nTrials: the minimum number of trials to be conducted. If the staircase has not reached the required number of reversals then it will continue.
nUp: the number of incorrect (or 0) responses before the staircase level increases.
nDown: the number of correct (or 1) responses before the staircase level decreases.
extraInfo: a dictionary (typically) that will be stored along with collected data using the saveAsPickle() or saveAsText() methods.
method: not used and may be deprecated in future releases.
stepType: 'db', 'lin' or 'log'. The type of steps that should be taken each time. 'lin' will simply add or subtract the given amount each step; 'db' and 'log' will step by the given number of decibels or log units (note that this will prevent your value ever reaching zero or less).
minVal: None, or a number. The smallest legal value for the staircase, which can be used to prevent it reaching impossible contrast values, for instance.
maxVal: None, or a number. The largest legal value for the staircase, which can be used to prevent it reaching impossible contrast values, for instance.
addData(result)
Add a 1 or 0 to signify a correct/detected or incorrect/missed trial. This is essential to advance the staircase to a new intensity level!
calculateNextIntensity()
Based on the current intensity, the counter of correct responses and the current direction.
next()
Advances to the next trial and returns it. Updates the attributes thisTrial, thisTrialN and thisIndex. If the trials have ended, calling this method will raise a StopIteration error. This can be handled with code such as:
staircase = StairHandler(.......)
for eachTrial in staircase:  # automatically stops when done
    # do stuff
or:
staircase = StairHandler(.......)
while True:  # ie forever
    try:
        thisTrial = staircase.next()
    except StopIteration:  # we got a StopIteration error
        break  # break out of the forever loop
    # do stuff here for the trial
nextTrial() DEPRECATION WARNING: StairHandler.nextTrial() will be deprecated please use StairHandler.next() instead. jwp: 19/6/06 printAsText(stimOut=[], dataOut=(rt_mean, rt_std, acc_raw), delim=t, matrixOnly=False) Exactly like saveAsText except that the output goes to the screen instead of a le saveAsExcel(leName, sheetName=data, matrixOnly=False, appendFile=True) Save a summary data le in Excel OpenXML format workbook (xlsx) for processing in most spreadsheet packages. This format is compatible with versions of Excel (2007 or greater) and and with OpenOfce (>=3.0). It has the advantage over the simpler text les (see TrialHandler.saveAsText() ) that data can be stored in multiple named sheets within the le. So you could have a single le named after your experiment and then have one worksheet for each participant. Or you could have one le for each participant and then multiple sheets for repeated sessions etc. The le extension .xlsx will be added if not given already. The le will contain a set of values specifying the staircase level (intensity) at each reversal, a list of reversal indices (trial numbers), the raw staircase/intensity level on every trial and the corresponding responses of the participant on every trial. Parameters leName: string the name of the le to create or append. Can include relative or absolute path sheetName: string the name of the worksheet within the le matrixOnly: True or False If set to True then only the data itself will be output (no additional info) appendFile: True or False If False any existing le with this name will be overwritten. If True then a new worksheet will be appended. If a worksheet already exists with that name a number will be added to make it unique. saveAsPickle(leName) Basically just saves a copy of self (with data) to a pickle le.
This can be reloaded if necessary and further analyses carried out.
saveAsText(fileName, delim='\t', matrixOnly=False)
Write a text file with the data
Parameters
fileName: a string. The name of the file, including path if needed. The extension .dlm will be added if not included.
delim: a string. the delimiter to be used (e.g. '\t' for tab-delimited, ',' for csv files)
matrixOnly: True/False. If True, prevents the output of the extraInfo provided at initialisation.
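A minimal sketch of a complete 3-down-1-up staircase run; the start value, step sizes and response value are placeholders:

from psychopy import data

staircase = data.StairHandler(startVal=0.5, stepSizes=[8, 4, 2], stepType='db',
                              nUp=1, nDown=3, nTrials=40, nReversals=6,
                              minVal=0.0, maxVal=1.0)

for thisIntensity in staircase:   # stops when nTrials AND nReversals are exceeded
    # ... draw a stimulus at thisIntensity and collect a response ...
    correct = 1                   # placeholder: 1 = correct, 0 = incorrect
    staircase.addData(correct)    # essential: this advances the staircase

staircase.saveAsText('stairs_subj01')
staircase.saveAsPickle('stairs_subj01')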
13.3.3 MultiStairHandler
class psychopy.data.MultiStairHandler(stairType='simple', method='random', conditions=None, nTrials=50)
A Handler to allow easy interleaved staircase procedures (simple or QUEST).
Parameters for the staircases, as used by the relevant StairHandler or QuestHandler (e.g. the startVal, minVal, maxVal...), should be specified in the conditions list and may vary between each staircase. In particular, the conditions must include a startVal (because this is a required argument to the above handlers), a label to tag the staircase, and a startValSd (only for QUEST staircases). Any parameters not specified in the conditions file will revert to the default for that individual handler.
Params
stairType: 'simple' or 'quest'. Use a StairHandler or a QuestHandler.
method: 'random' or 'sequential'. The stairs are shuffled in each repeat but not randomised more than that (so you can't have 3 repeats of the same staircase in a row unless it is the only one still running)
conditions: a list of dictionaries specifying conditions. Can be used to control parameters for the different staircases. Can be imported from an Excel file using psychopy.data.importTrialTypes. MUST include keys providing startVal, label and startValSd (QUEST only). The label will be used in data file saving so should be unique. See Example Usage below.
nTrials=50: minimum trials to run (but may take more if the staircase hasn't also met its minimum reversals. See StairHandler)
Example usage:
conditions = [
    {'label': 'low',  'startVal': 0.1, 'ori': 45},
    {'label': 'high', 'startVal': 0.8, 'ori': 45},
    {'label': 'low',  'startVal': 0.1, 'ori': 90},
    {'label': 'high', 'startVal': 0.8, 'ori': 90},
]
stairs = MultiStairHandler(conditions=conditions, nTrials=50)

for thisIntensity, thisCondition in stairs:
    thisOri = thisCondition['ori']
    # do something with thisIntensity and thisOri
    stairs.addData(correctIncorrect)  # this is ESSENTIAL
addData(result) Add a 1 or 0 to signify a correct/detected or incorrect/missed trial This is essential to advance the staircase to a new intensity level! next() Advances to next trial and returns it. This can be handled with code such as:
staircase = MultiStairHandler(.......)
for eachTrial in staircase:  # automatically stops when done
    # do stuff here for the trial
or:
staircase = MultiStairHandler(.......)
while True:  # ie forever
    try:
        thisTrial = staircase.next()
    except StopIteration:  # we got a StopIteration error
        break  # break out of the forever loop
    # do stuff here for the trial
printAsText(delim=t, matrixOnly=False) Write the data to the standard output stream Parameters delim: a string the delimitter to be used (e.g. for tab-delimitted, , for csv les) matrixOnly: True/False If True, prevents the output of the extraInfo provided at initialisation. saveAsExcel(leName, matrixOnly=False, appendFile=False) Save a summary data le in Excel OpenXML format workbook (xlsx) for processing in most spreadsheet packages. This format is compatible with versions of Excel (2007 or greater) and and with OpenOfce (>=3.0). It has the advantage over the simpler text les (see TrialHandler.saveAsText() ) that the data from each staircase will be save in the same le, with the sheet name coming from the label given in the dictionary of conditions during initialisation of the Handler. The le extension .xlsx will be added if not given already. The le will contain a set of values specifying the staircase level (intensity) at each reversal, a list of reversal indices (trial numbers), the raw staircase/intensity level on every trial and the corresponding responses of the participant on every trial. Parameters leName: string the name of the le to create or append. Can include relative or absolute path matrixOnly: True or False If set to True then only the data itself will be output (no additional info)
appendFile: True or False If False any existing le with this name will be overwritten. If True then a new worksheet will be appended. If a worksheet already exists with that name a number will be added to make it unique. saveAsPickle(leName) Saves a copy of self (with data) to a pickle le. This can be reloaded later and further analyses carried out. saveAsText(leName, delim=t, matrixOnly=False) Write out text les with the data. For MultiStairHandler this will output one le for each staircase that was run, with _label added to the leName that you specify above (label comes from the condition dictionary you specied when you created the Handler). Parameters leName: a string The name of the le, including path if needed. The extension .dlm will be added if not included. delim: a string the delimitter to be used (e.g. for tab-delimitted, , for csv les) matrixOnly: True/False If True, prevents the output of the extraInfo provided at initialisation.
13.3.4 QuestHandler
class psychopy.data.QuestHandler(startVal, startValSd, pThreshold=0.82, nTrials=None, stopInterval=None, method='quantile', stepType='log', beta=3.5, delta=0.01, gamma=0.5, grain=0.01, range=None, extraInfo=None, minVal=None, maxVal=None, staircase=None)
Class that implements the QUEST algorithm using Python code from XXX. Like StairHandler, it handles the selection of the next trial and reports current values etc. Calls to nextTrial() will fetch the next object given to this handler, according to the method specified.
The staircase will terminate when nTrials or XXX has been exceeded.
Measures threshold using a Weibull psychometric function. Currently it is not possible to use a different psychometric function. Threshold t is measured on an abstract intensity scale, which usually corresponds to log10 contrast.
The Weibull psychometric function:
p2 = delta*gamma + (1-delta)*(1 - (1-gamma)*exp(-10**(beta*(x2 + xThreshold))))
Example:
# setup display/window
...
# create stimulus
stimulus = visual.RadialStim(win=win, tex='sinXsin', size=1, pos=[0,0], units='deg')
...
# create staircase object
# trying to find out the point where the subject's response is 50/50
# if you wanted to do a 2AFC then the defaults for pThreshold and gamma are good
staircase = data.QuestHandler(staircase._nextIntensity, 0.2, pThreshold=0.63, gamma=0.01,
                              nTrials=20, minVal=0, maxVal=1)
...
for thisContrast in staircase:
    # setup stimulus
    stimulus.setContrast(thisContrast)
    stimulus.draw()
    win.flip()
    core.wait(0.5)
    # get response
    ...
    # add response
    staircase.addData(thisResp)
...
# can now access 1 of 3 suggested threshold levels
staircase.mean()
staircase.mode()
staircase.quantile()  # gets the median
Typical values for pThreshold are: 0.82 which is equivalent to a 3 up 1 down standard staircase 0.63 which is equivalent to a 1 up 1 down standard staircase (and might want gamma=0.01) The variable(s) nTrials and/or stopSd must be specied. beta, delta, and gamma are the parameters of the Weibull psychometric function. Parameters startVal: Prior threshold estimate or your initial guess threshold. startValSd: Standard deviation of your starting guess threshold. Be generous with the sd as QUEST will have trouble nding the true threshold if its more than one sd from your initial guess. pThreshold Your threshold criterion expressed as probability of response==1. An intensity offset is introduced into the psychometric function so that the threshold (i.e., the midpoint of the table) yields pThreshold.. nTrials: None or a number The maximum number of trials to be conducted. stopInterval: None or a number The minimum 5-95% condence interval required in the threshold estimate before stopping. If both this and nTrials is specied, whichever happens rst will determine when Quest will stop. method: quantile, mean, mode The method used to determine the next threshold to test. If you want to get a specic threshold level at the end of your staircasing, please use the quantile, mean, and mode methods directly. stepType: log, db, lin The type of steps that should be taken each time. db and log will transform your intensity levels into decibels or log units and will move along the psychometric function with these values. beta: 3.5 or a number Controls the steepness of the psychometric function. delta: 0.01 or a number The fraction of trials on which the observer presses blindly. gamma: 0.5 or a number The fraction of trials that will generate response 1 when intensity=Inf. grain: 0.01 or a number The quantization of the internal table.
range: None, or a number The intensity difference between the largest and smallest intensity that the internal table can store. This interval will be centered on the initial guess tGuess. QUEST assumes that intensities outside of this range have zero prior probability (i.e., they are impossible). extraInfo: A dictionary (typically) that will be stored along with collected data using saveAsPickle() or saveAsText() methods. minVal: None, or a number The smallest legal value for the staircase, which can be used to prevent it reaching impossible contrast values, for instance. maxVal: None, or a number The largest legal value for the staircase, which can be used to prevent it reaching impossible contrast values, for instance. staircase: None or StairHandler Can supply a staircase object with intensities and results. Might be useful to give the quest algorithm more information if you have it. You can also call the importData function directly. addData(result, intensity=None) Add a 1 or 0 to signify a correct/detected or incorrect/missed trial Also update the intensity calculateNextIntensity() based on current intensity and counter of correct responses confInterval(getDifference=False) give the range of the 5-95% condence interval importData(intensities, results) import some data which wasnt previously given to the quest algorithm incTrials(nNewTrials) increase maximum number of trials Updates attribute: nTrials mean() mean of Quest posterior pdf mode() mode of Quest posterior pdf next() Advances to next trial and returns it. Updates attributes; thisTrial, thisTrialN, thisIndex, nished, intensities If the trials have ended, calling this method will raise a StopIteration error. This can be handled with code such as:
staircase = QuestHandler(.......)
for eachTrial in staircase:  # automatically stops when done
    # do stuff
or:
staircase = QuestHandler(.......)
while True:  # ie forever
    try:
        thisTrial = staircase.next()
    except StopIteration:  # we got a StopIteration error
        break  # break out of the forever loop
    # do stuff here for the trial
nextTrial() DEPRECATION WARNING: StairHandler.nextTrial() will be deprecated please use StairHandler.next() instead. jwp: 19/6/06
printAsText(stimOut=[], dataOut=(rt_mean, rt_std, acc_raw), delim=t, matrixOnly=False) Exactly like saveAsText except that the output goes to the screen instead of a le quantile(p=None) quantile of Quest posterior pdf saveAsExcel(leName, sheetName=data, matrixOnly=False, appendFile=True) Save a summary data le in Excel OpenXML format workbook (xlsx) for processing in most spreadsheet packages. This format is compatible with versions of Excel (2007 or greater) and and with OpenOfce (>=3.0). It has the advantage over the simpler text les (see TrialHandler.saveAsText() ) that data can be stored in multiple named sheets within the le. So you could have a single le named after your experiment and then have one worksheet for each participant. Or you could have one le for each participant and then multiple sheets for repeated sessions etc. The le extension .xlsx will be added if not given already. The le will contain a set of values specifying the staircase level (intensity) at each reversal, a list of reversal indices (trial numbers), the raw staircase/intensity level on every trial and the corresponding responses of the participant on every trial. Parameters leName: string the name of the le to create or append. Can include relative or absolute path sheetName: string the name of the worksheet within the le matrixOnly: True or False If set to True then only the data itself will be output (no additional info) appendFile: True or False If False any existing le with this name will be overwritten. If True then a new worksheet will be appended. If a worksheet already exists with that name a number will be added to make it unique. saveAsPickle(leName) Basically just saves a copy of self (with data) to a pickle le. This can be reloaded if necess and further analyses carried out. saveAsText(leName, delim=t, matrixOnly=False) Write a text le with the data Parameters leName: a string The name of the le, including path if needed. The extension .dlm will be added if not included. delim: a string the delimitter to be used (e.g. for tab-delimitted, , for csv les) matrixOnly: True/False If True, prevents the output of the extraInfo provided at initialisation. sd() standard deviation of Quest posterior pdf simulate(tActual) returns a simulated user response to the next intensity level presented by Quest, need to supply the actual threshold level
13.3.5 FitWeibull
class psychopy.data.FitWeibull(xx, yy, sems=1.0, guess=None, display=1, expectedMin=0.5)
Fit a Weibull function (either 2AFC or YN) of the form:
y = chance + (1.0 - chance)*(1 - exp(-(xx/alpha)**beta))
After fitting the function you can evaluate an array of x-values with fit.eval(x), retrieve the inverse of the function with fit.inverse(y), or retrieve the parameters from fit.params (a list with [alpha, beta]).
eval(xx=None, params=None)
inverse(yy, params=None)
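A short sketch of the fitting workflow; the contrast levels and proportions correct below are invented for illustration:

from psychopy import data

# intensities tested and proportion correct at each (2AFC, so chance = 0.5)
contrasts = [0.05, 0.1, 0.2, 0.4, 0.8]
pCorrect = [0.52, 0.60, 0.75, 0.90, 0.98]

fit = data.FitWeibull(contrasts, pCorrect, expectedMin=0.5)
alpha, beta = fit.params     # threshold and slope of the fitted function
print fit.eval(0.3)          # predicted performance at a new contrast
print fit.inverse(0.75)      # contrast expected to give 75% correct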
13.3.6 FitLogistic
class psychopy.data.FitLogistic(xx, yy, sems=1.0, guess=None, display=1, expectedMin=0.5)
Fit a Logistic function (either 2AFC or YN) of the form:
y = chance + (1 - chance)/(1 + exp((PSE - xx)*JND))
After fitting the function you can evaluate an array of x-values with fit.eval(x), retrieve the inverse of the function with fit.inverse(y), or retrieve the parameters from fit.params (a list with [PSE, JND]).
eval(xx=None, params=None)
inverse(yy, params=None)
13.3.7 FitNakaRushton
class psychopy.data.FitNakaRushton(xx, yy, sems=1.0, guess=None, display=1)
Fit a Naka-Rushton function of the form:
yy = rMin + (rMax - rMin) * xx**n / (xx**n + c50**n)
After fitting the function you can evaluate an array of x-values with fit.eval(x), retrieve the inverse of the function with fit.inverse(y), or retrieve the parameters from fit.params (a list with [rMin, rMax, c50, n]).
Note that this differs from most of the other functions in not using a value for the expected minimum. Rather, it fits this as one of the parameters of the model.
eval(xx=None, params=None)
inverse(yy, params=None)
13.3.8 FitCumNormal
class psychopy.data.FitCumNormal(xx, yy, sems=1.0, guess=None, display=1, expectedMin=0.5)
Fit a Cumulative Normal function (aka error function or erf) of the form:
y = chance + (1 - chance)*(special.erf(xx*xScale - xShift)/2.0 + 0.5)
After fitting the function you can evaluate an array of x-values with fit.eval(x), retrieve the inverse of the function with fit.inverse(y), or retrieve the parameters from fit.params (a list with [xShift, xScale]).
eval(xx=None, params=None)
inverse(yy, params=None)
13.3.9 importTrialList()
psychopy.data.importTrialList(fileName, returnFieldNames=False)
Imports a list of TrialTypes from an Excel (.xlsx) or comma-separated-value file.
If fileName ends with .csv then it will be imported as a comma-separated-value file; all other filenames will be treated as Excel 2007 (xlsx) files. Sorry, no support for older versions of Excel files is planned.
The file should contain one row per type of trial needed and one column for each parameter that defines the trial type. The first row should give parameter names, which should: be unique; begin with a letter (upper or lower case); contain no spaces or other punctuation (underscores are permitted).
13.3.10 functionFromStaircase()
psychopy.data.functionFromStaircase(intensities, responses, bins=10)
Create a psychometric function by binning data from a staircase procedure
usage:
[intensity, meanCorrect, n] = functionFromStaircase(intensities, responses, bins)
where:
intensities are a list of intensities to be binned
responses are a list of 0,1 values, each corresponding to the equivalent intensity value
bins can be an integer (giving that number of bins) or 'unique' (where each bin is made from ALL data for exactly one intensity value)
intensity is the center of an intensity bin
meanCorrect is the mean % correct in that bin
n is the number of responses contributing to that mean
13.3.11 bootStraps()
psychopy.data.bootStraps(dat, n=1)
Create a list of n bootstrapped resamples of the data. SLOW IMPLEMENTATION (Python for-loop)
Usage:
out = bootStraps(dat, n=1)
Where:
dat is an NxM or 1xN array (each row is a different condition, each column is a different trial)
n is the number of bootstrapped resamples to create
out has dim[0]=conditions, dim[1]=trials, dim[2]=resamples
Typically you want to call mouse.clickReset() at stimulus onset, then after the button is pressed in reaction to it, the total time elapsed from the last reset to click is in mouseTimes. This is the actual RT, regardless of when the call to getPressed() was made. getRel() Returns the new position of the mouse relative to the last call to getRel or getPos, in the same units as the Window. getVisible() Gets the visibility of the mouse (1 or 0) getWheelRel() Returns the travel of the mouse scroll wheel since last call. Returns a numpy.array(x,y) but for most wheels y is the only value that will change (except mac mighty mice?) mouseMoved(distance=None, reset=False) Determine whether/how far the mouse has moved With no args returns true if mouse has moved at all since last getPos() call, or distance (x,y) can be set to pos or neg distances from x and y to see if moved either x or y that far from lastPos , or distance can be an int/oat to test if new coordinates are more than that far in a straight line from old coords. Retrieve time of last movement from self.mouseClock.getTime(). Reset can be to here or to screen coords (x,y) which allows measuring distance from there to mouse when moved. if reset is (x,y) and distance is set, then prevPos is set to (x,y) and distance from (x,y) to here is checked, mouse.lastPos is set as current (x,y) by getPos(), mouse.prevPos holds lastPos from last time mouseMoved was called setPos(newPos=(0, 0)) Sets the current postiion of the mouse (pygame only), in the same units as the Window (0,0) is at centre Parameters newPos [(x,y) or [x,y]] the new position on the screen setVisible(visible) Sets the visibility of the mouse to 1 or 0 NB when the mouse is not visible its absolute position is held at (0,0) to prevent it from going off the screen and getting lost! You can still use getRel() in that case. psychopy.event.clearEvents(eventType=None) Clears all events currently in the event buffer. Optional argument, eventType, species only certain types to be cleared Parameters eventType [None, mouse, joystick, keyboard ] If this is not None then only events of the given type are cleared psychopy.event.getKeys(keyList=None, timeStamped=False) Returns a list of keys that were pressed. Parameters keyList [None or []] Allows the user to specify a set of keys to check for. Only keypresses from this set of keys will be removed from the keyboard buffer. If the keyList is None all keys will be checked and the key buffer will be cleared completely. NB, pygame doesnt return timestamps (they are always 0)
timeStamped : False, True or a Clock. If True will return a list of tuples instead of a list of keynames. Each tuple has (keyname, time). If a core.Clock is given then the time will be relative to the Clock's last reset.
Author: 2003 written by Jon Peirce; 2009 keyList functionality added by Gary Strangman; 2009 timeStamped code provided by Dave Britton
psychopy.event.waitKeys(maxWait=None, keyList=None)
Halts everything (including drawing) while awaiting input from the keyboard. Then returns the list of keys pressed. Implicitly clears the keyboard, so any preceding keypresses will be lost. Optional arguments specify the maximum wait period and which keys to wait for. Returns None if it times out.
psychopy.event.xydist(p1=[0.0, 0.0], p2=[0.0, 0.0])
Helper function returning the cartesian distance between p1 and p2
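A brief sketch of typical keyboard handling with these functions; the key names are illustrative:

from psychopy import core, event

rtClock = core.Clock()
event.clearEvents(eventType='keyboard')  # start with an empty keyboard buffer
rtClock.reset()

# block until one of these keys is pressed (or add maxWait to time out)
keys = event.waitKeys(keyList=['left', 'right', 'escape'])
print keys

# alternatively, poll without blocking; times are relative to rtClock's last reset
for keyName, keyTime in event.getKeys(timeStamped=rtClock):
    print keyName, keyTime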
cutoff [oat] relative cutoff frequency of the lter (0 - 1.0) n [int, optional] order of the lter, the higher n is the sharper the transition is. Returns numpy.ndarray lter kernel in 2D centered psychopy.filters.butter2d_lp_elliptic(size, cutoff_x, cutoff_y, n=3, alpha=0, offset_x=0, offset_y=0) Butterworth lowpass lter of any elliptical shape. Parameters size [tuple] size of the lter cutoff_x, cutoff_y [oat, oat] relative cutoff frequency of the lter (0 - 1.0) for x and y axes alpha [oat, optional] rotation angle (in radians) offset_x, offset_y [oat] offsets for the ellipsoid n [int, optional] order of the lter, the higher n is the sharper the transition is. Returns numpy.ndarray: lter kernel in 2D centered psychopy.filters.conv2d(smaller, larger) convolve a pair of 2d numpy matrices Uses fourier transform method, so faster if larger matrix has dimensions of size 2**n Actually right now the matrices must be the same size (will sort out padding issues another day!) psychopy.filters.getRMScontrast(matrix) Returns the RMS contrast (the sample standard deviation) of a array psychopy.filters.imfft(X) Perform 2D FFT on an image and center low frequencies psychopy.filters.imifft(X) Inverse 2D FFT with decentering psychopy.filters.makeGauss(x, mean=0.0, sd=1.0, gain=1.0, base=0.0) Return the gaussian distribution for a given set of x-vals Parameters mean: oat the centre of the distribution sd: oat the width of the distribution gain: oat the height of the distribution base: oat an offset added to the result psychopy.filters.makeGrating(res, ori=0.0, cycles=1.0, phase=0.0, gratType=sin, contr=1.0) Make an array containing a luminance grating of the specied params Parameters res: integer the size of the resulting matrix on both dimensions (e.g 256) ori: oat or int (default=0.0) the orientation of the grating in degrees cycles:oat or int (default=1.0) the number of grating cycles within the array
phase: float or int (default=0.0). the phase of the grating in degrees (NB this differs from most PsychoPy phase arguments, which use units of fraction of a cycle)
gratType: 'sin', 'sqr', 'ramp' or 'sinXsin' (default='sin'). the type of grating to be drawn
contr: float (default=1.0). contrast of the grating
Returns a square numpy array of size res x res
psychopy.filters.makeMask(matrixSize, shape='circle', radius=1.0, center=(0.0, 0.0), range=[-1, 1])
Returns a matrix to be used as an alpha mask (circle, gauss, ramp)
Parameters
matrixSize: integer. the size of the resulting matrix on both dimensions (e.g. 256)
shape: 'circle', 'gauss' or 'ramp' (linear gradient from center). shape of the mask
radius: float. scale factor to be applied to the mask (a circle with radius of [1,1] will extend just to the edge of the matrix). The radius can be asymmetric, e.g. [1.0, 2.0] will be wider than it is tall.
center: 2x1 tuple or list (default=[0.0, 0.0]). the centre of the mask in the matrix ([1,1] is the top-right corner, [-1,-1] is bottom-left)
psychopy.filters.makeRadialMatrix(matrixSize, center=(0.0, 0.0), radius=1.0)
Generate a square matrix where each element's value is its distance from the centre of the matrix
Parameters
matrixSize: integer. the size of the resulting matrix on both dimensions (e.g. 256)
radius: float. scale factor to be applied to the mask (a circle with radius of [1,1] will extend just to the edge of the matrix). The radius can be asymmetric, e.g. [1.0, 2.0] will be wider than it is tall.
center: 2x1 tuple or list (default=[0.0, 0.0]). the centre of the mask in the matrix ([1,1] is the top-right corner, [-1,-1] is bottom-left)
psychopy.filters.maskMatrix(matrix, shape='circle', radius=1.0, center=(0.0, 0.0))
Make and apply a mask to an input matrix (e.g. a grating)
Parameters
matrix: a square numpy array. the array to which the mask should be applied
shape: 'circle', 'gauss' or 'ramp' (linear gradient from center). shape of the mask
radius: float. scale factor to be applied to the mask (a circle with radius of [1,1] will extend just to the edge of the matrix). The radius can be asymmetric, e.g. [1.0, 2.0] will be wider than it is tall.
center: 2x1 tuple or list (default=[0.0, 0.0]). the centre of the mask in the matrix ([1,1] is the top-right corner, [-1,-1] is bottom-left)
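A short sketch combining these functions to build a Gabor-like texture; the resolution and window settings are illustrative, and using the array directly as a PatchStim texture assumes its values lie in roughly the -1:1 range:

from psychopy import visual, filters

# a 256x256 grating at 45 deg with 8 cycles, under a Gaussian envelope
grating = filters.makeGrating(res=256, ori=45.0, cycles=8, gratType='sin')
gabor = filters.maskMatrix(grating, shape='gauss', radius=0.8)

win = visual.Window([512, 512], units='pix')
stim = visual.PatchStim(win, tex=gabor, mask=None, size=256)
stim.draw()
win.flip()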
info = {'Observer': 'jwp', 'GratingOri': 45, 'ExpVersion': 1.1}
infoDlg = gui.DlgFromDict(dictionary=info, title='TestExperiment', fixed=['ExpVersion'])
if infoDlg.OK:
    print info
else:
    print 'User Cancelled'
In the code above, the contents of info will be updated to the values returned by the dialogue box. If the user cancels (rather than pressing OK), then the dictionary remains unchanged. If you want to check whether the user hit OK, then check whether DlgFromDict.OK equals True or False.
See GUI.py for a usage demo, including order and tip (tooltip).
13.6.2 Dlg
class psychopy.gui.Dlg(title='PsychoPy dialogue', pos=None, size=wx.Size(-1, -1), style=536877057)
A simple dialogue box. You can add text or input boxes (sequentially) and then retrieve the values.
See also the function dlgFromDict for an even simpler version.
Example:
from psychopy import gui
myDlg = gui.Dlg(title="JWP's experiment")
myDlg.addText('Subject info')
myDlg.addField('Name:')
myDlg.addField('Age:', 21)
myDlg.addText('Experiment Info')
myDlg.addField('Grating Ori:', 45)
myDlg.show()  # show dialog and wait for OK or Cancel
if myDlg.OK:  # then the user pressed OK
    thisInfo = myDlg.data
    print thisInfo
else:
    print 'user cancelled'
addField(label='', initial='', color='', tip='')
Adds a (labelled) input field to the dialogue box, with optional text color and tooltip. Returns a handle to the field (but not to the label).
addFixedField(label='', value='', tip='')
Adds a field to the dialogue box (like addField) but the field cannot be edited, e.g. to display the experiment version. Tooltips are disabled (by wx).
addText(text, color='')
show()
Presents the dialog and waits for the user to press either OK or CANCEL. This function returns nothing. When they do, dlg.OK will be set to True or False (according to which button they pressed). If OK==True then dlg.data will be populated with a list of values coming from each of the input fields created.
13.6.3 fileOpenDlg
class psychopy.gui.fileOpenDlg
A simple dialogue allowing access to the file system. (Useful in case you collect an hour of data and then try to save it to a non-existent directory!!)
Parameters
tryFilePath: string. default file path on which to open the dialog
tryFileName: string. default file name, as the suggested file
prompt: string (default 'Select file to open'). can be set to a custom prompt
allowed: string (available since v1.62.01). a string to specify file filters, e.g. 'BMP files (*.bmp)|*.bmp|GIF files (*.gif)|*.gif'. See http://www.wxpython.org/docs/api/wx.FileDialog-class.html for further details
If tryFilePath or tryFileName are empty or invalid then the current path and empty names are used to start the search. If the user cancels, then None is returned.
13.6.4 fileSaveDlg
class psychopy.gui.fileSaveDlg
A simple dialogue allowing access to the file system. (Useful in case you collect an hour of data and then try to save it to a non-existent directory!!)
Parameters
initFilePath: string. default file path on which to open the dialog
initFileName: string. default file name, as the suggested file
prompt: string (default 'Select file to open'). can be set to a custom prompt
allowed: string. a string to specify file filters, e.g. 'BMP files (*.bmp)|*.bmp|GIF files (*.gif)|*.gif'. See http://www.wxpython.org/docs/api/wx.FileDialog-class.html for further details
If initFilePath or initFileName are empty or invalid then the current path and empty names are used to start the search. If the user cancels then None is returned.
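A brief sketch of fileOpenDlg usage; the filter string is illustrative, and it is assumed here that a list of the selected paths is returned (None on cancel):

from psychopy import gui

files = gui.fileOpenDlg(prompt='Select a data file',
                        allowed='CSV files (*.csv)|*.csv|All files (*.*)|*.*')
if files is None:
    print 'user cancelled'
else:
    print files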
dev.reset_rt_timer()

while True:
    dev.poll_for_response()
    if dev.response_queue_size() > 0:
        response = dev.get_next_response()
        # do something with the response
Useful functions
pyxid.get_xid_device(device_number)
Returns the device at a given index. Raises ValueError if the device at the passed-in index doesn't exist.
pyxid.get_xid_devices()
Returns a list of all Xid devices connected to your computer.
Device classes
class pyxid.ResponseDevice(connection, name='Unknown XID Device', keymap=None, trigger_prefix='Button')
clear_response_queue()
Clears the response queue
get_next_response()
Pops the response at the beginning of the response queue and returns it. This function returns a dict object with the following keys:
pressed: a boolean value of whether the event was a key press or key release.
key: the key on the device that was pressed. This is a 0-based index.
port: the device port the response came from. Typically this is 0 on RB-series devices, and 2 on SV-1 voice key devices.
time: for the time being, this just returns 0. There is currently an issue with clock drift in the Cedrus XID devices. Once we have this issue resolved, time will report the value of the RT timer in milliseconds.
poll_for_response()
Polls the device for user input. If there is a keymapping for the device, the key map is applied to the key reported from the device. If a response is waiting to be processed, the response is appended to the internal response_queue.
response_queue_size()
Number of responses in the response queue
class pyxid.XidDevice(xid_connection)
Class for interfacing with a Cedrus XID device.
At the beginning of an experiment, the developer should call:
XidDevice.reset_base_timer()
Whenever a stimulus is presented, the developer should call:
XidDevice.reset_rt_timer()
Developer's Note: Currently there is a known issue of clock drift in the XID devices. Due to this, the dict returned by XidDevice.get_next_response() returns 0 for the reaction time value. This issue will be resolved in a future release of this library.
init_device()
Initializes the device with the proper keymaps and name
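Putting the calls above together, a minimal response-collection sketch (device index 0 is an assumption):

import pyxid

devices = pyxid.get_xid_devices()  # all connected Cedrus XID devices
dev = devices[0]

dev.reset_base_timer()             # once, at the start of the experiment
dev.reset_rt_timer()               # at each stimulus onset

response = None
while response is None:
    dev.poll_for_response()
    if dev.response_queue_size() > 0:
        response = dev.get_next_response()

print response['key'], response['pressed']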
setContrast(contrast, LUTrange=1.0) Optional parameter LUTrange determines which entries of the LUT will be set to this contrast Parameters contrast [oat in the range 0:1] The contrast for the range being set LUTrange [oat or array] If a oat is given then this is the fraction of the LUT to be used. If an array of oats is given, these will specify the start/stop points as fractions of the LUT. If an array of ints (0-255) is given these determine the start stop indices of the LUT Examples: setContrast(1.0,0.5) will set the central 50% of the LUT so that a stimulus with contr=0.5 will actually be drawn with contrast 1.0 setContrast(1.0,[0.25,0.5]) setContrast(1.0,[63,127]) will set the lower-middle quarter of the LUT (which might be useful in LUT animation paradigms) setGamma(newGamma) Set the LUT to have the requested gamma. Currently also resets the LUT to be a linear contrast ramp spanning its full range. May change this to read the current LUT, undo previous gamm and then apply new one? setLUT(newLUT=None, gammaCorrect=True, LUTrange=1.0) Sets the LUT to a specic range of values. Note that, if you leave gammaCorrect=True then any LUT values you supply will automatically be gamma corrected. If BitsBox setMethod is fast then the LUT will take effect on the next Window.update() If the setMethod is slow then the update will take place over the next 1-4secs down the USB port. Examples: bitsBox.setLUT() builds a LUT using bitsBox.contrast and bitsBox.gamma
bitsBox.setLUT(newLUT=some256x1array) (NB the array should be float 0.0:1.0) Builds a luminance LUT using newLUT for each gun (actually the array can be 256x1 or 1x256)

bitsBox.setLUT(newLUT=some256x3array) (NB the array should be float 0.0:1.0) Allows you to use a different LUT on each gun (NB by using BitsBox.setContrast() and BitsBox.setGamma() users may not need this function!?)
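As a minimal sketch (it assumes a BitsBox instance has already been created and bound to the name bitsBox; the constructor arguments are not shown in this excerpt, so check the class signature in your installation), the methods above could be combined as:

# all calls below use only the methods documented above
bitsBox.setGamma(2.2)                   # linear ramp with gamma correction
bitsBox.setContrast(1.0, LUTrange=0.5)  # central 50% of the LUT at full contrast
bitsBox.setContrast(1.0, [0.25, 0.5])   # lower-middle quarter of the LUT
bitsBox.setLUT()                        # rebuild the LUT from .contrast and .gamma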
For an example see the demos menu of the PsychoPy Coder. For further documentation see the pynetstation website.
class psychopy.hardware.forp.ButtonBox(serialPort=1) Serial line interface to the fORP MRI response box

Set the box to use setting 0 or 1 and connect the serial line to use this object class. (Alternatively connect the USB cable and use fORP to emulate a keyboard.) The fORP sends characters at 800Hz, so you should check the buffer frequently. Also note that the trigger event from the fORP is typically extremely short (occurs for a single 800Hz epoch). serialPort should be a number (where 1=COM1, ...)

clearBuffer() Empty the input buffer of all characters

getEvents(returnRaw=False) Returns a list of unique events (one event per button pressed) AND stores a copy of the full list of events since the last getEvents() (stored as ForpBox.rawEvts)

getUniqueEvents(fullEvts=None) Returns a Python set of the unique (unordered) events of either a list given or the current rawEvts buffer
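A brief usage sketch based on the methods above (COM1 and the simple polling loop are assumptions for illustration):

from psychopy import core
from psychopy.hardware import forp

respBox = forp.ButtonBox(serialPort=1)   # 1 = COM1
respBox.clearBuffer()                    # discard anything already in the buffer
for frameN in range(300):                # poll frequently (fORP sends at 800Hz)
    events = respBox.getEvents()
    if events:
        print events
    core.wait(0.001)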
13.7.5 ioLab
Interface to the ioLab button box This is currently a simple import of the ioLab python library. That needs to be installed (but is included in the Standalone distributions of PsychoPy as of version 1.62.01).

installation:
easy_install iolabs
usage:
from psychopy.hardware import ioLabs
for examples see the demos menu of the PsychoPy Coder or go to the URL above.
13.7.6 labjack

The labjack interface should be imported via psychopy.hardware (see the usage notes below), NOT simply:
import u3
In all other respects the library is the same and instructions on how to use it can be found here: http://labjack.com/support/labjackpython Note: To use labjack devices you also need to install the driver software described on the page above
13.7.7 Minolta
Minolta light-measuring devices See http://www.konicaminolta.com/instruments
class psychopy.hardware.minolta.LS100(port, maxAttempts=1) A class to define a Minolta LS100 (or LS110?) photometer

You need to connect a LS100 to the serial (RS232) port and when you turn it on press the F key on the device. This will put it into the correct mode to communicate with the serial port.

usage:
from psychopy.hardware import minolta
phot = minolta.LS100(port)
if phot.OK:  # then we successfully made a connection and can send/receive
    print phot.getLum()
Parameters port: string the serial port that should be checked

maxAttempts: int If the device doesn't respond the first time, how many attempts should be made? If you're certain that this is the correct port and the device is on and correctly configured then this could be set high. If not then set this low.

Troubleshooting Various messages are printed to the log regarding the function of this device, but to see them you need to set the printing of the log to the correct level:
from psychopy import log
log.console.setLevel(log.ERROR)  # error messages only
log.console.setLevel(log.INFO)   # will give a little more info
log.console.setLevel(log.DEBUG)  # will export a log of all communications
If you're using a Keyspan adapter (at least on OS X) be aware that it needs a driver installed. Otherwise no ports will be found.

Error messages:

ERROR: Couldn't connect to Minolta LS100/110 on ____: This likely means that the device is not connected to that port (although the port has been found and opened). Check that the device has the [ in the bottom right of the display; if not, turn it off and on again holding the F key.

ERROR: No reply from LS100: The port was found, the connection was made and an initial command worked, but then the device stopped communicating. If the first measurement taken with the device after connecting does not yield a reasonable intensity the device can sulk (not a technical term!). The [ on the display will disappear and you can no longer communicate with the device. Turn it off and on again (with F depressed) and use a reasonably bright screen for your first measurement. Subsequent measurements can be dark (or we really would be in trouble!!).

checkOK(msg) Check that the message from the photometer is OK. If there's an error print it. Then return True (OK) or False.

clearMemory() Clear the memory of the device from previous measurements

getLum() Makes a measurement and returns the luminance value

measure() Measure the current luminance and set .lastLum to this value

sendMessage(message, timeout=5.0) Send a command to the photometer and wait an allotted timeout for a response.

setMaxAttempts(maxAttempts) Changes the number of attempts to send a message and read the output. Typically this should be low initially, if you aren't sure that the device is set up correctly, but then, after the first successful reading, set it higher.

setMode(mode='04') Set the mode for measurements. Returns True (success) or False. '04' means absolute measurements, '08' = peak, '09' = cont. See the user manual for other modes.
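A short sketch of a typical measurement session, using only the calls documented above (the value of port is whatever serial port your LS100 is attached to, as in the usage example earlier):

from psychopy.hardware import minolta

phot = minolta.LS100(port, maxAttempts=1)
if phot.OK:                  # connection was made successfully
    phot.setMaxAttempts(10)  # once connected we can afford more retries
    phot.setMode('04')       # absolute measurements
    print phot.getLum()      # make a measurement and print the luminance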
13.7.8 PhotoResearch
Supported devices: PR650, PR655/PR670
class psychopy.hardware.pr.PR650(port, verbose=None) An interface to the PR650 via the serial port. (Added in version 1.63.02) example usage:
from psychopy.hardware.pr import PR650
myPR650 = PR650(port)
myPR650.getLum()  # make a measurement
nm, power = myPR650.getLastSpectrum()  # get a power spectrum for the last measurement
NB psychopy.hardware.findPhotometer() will locate and return any supported device for you so you can also do:
from psychopy import hardware
phot = hardware.findPhotometer()
print phot.getLum()
Troubleshooting Various messages are printed to the log regarding the function of this device, but to see them you need to set the printing of the log to the correct level:
from psychopy import log
log.console.setLevel(log.ERROR)  # error messages only
log.console.setLevel(log.INFO)   # will give a little more info
log.console.setLevel(log.DEBUG)  # will export a log of all communications
If you're using a Keyspan adapter (at least on OS X) be aware that it needs a driver installed. Otherwise no ports will be found. Also note that the attempt to connect to the PR650 must occur within the first few seconds after turning it on.

getLastLum() This retrieves the luminance (in cd/m**2) from the last call to .measure()

getLastSpectrum(parse=True) This retrieves the spectrum from the last call to .measure() If parse=True (default): The format is a num array with 100 rows [nm, power] otherwise: The output will be the raw string from the PR650 and should then be passed to .parseSpectrumOutput(). It's more efficient to parse R,G,B strings at once than each individually.

getLum() Makes a measurement and returns the luminance value

getSpectrum(parse=True) Makes a measurement and returns the current power spectrum If parse=True (default): The format is a num array with 100 rows [nm, power] If parse=False: The output will be the raw string from the PR650 and should then be passed to .parseSpectrumOutput(). It's slightly more efficient to parse R,G,B strings at once than each individually.
measure(timeOut=30.0) Make a measurement with the device. For a PR650 the device is instructed to make a measurement and then subsequent commands are issued to retrieve info about that measurement

parseSpectrumOutput(rawStr) Parses the strings from the PR650 as received after sending the command d5. The input argument rawStr can be the output from a single phosphor spectrum measurement or a list of 3 such measurements [rawR, rawG, rawB].

sendMessage(message, timeout=0.5, DEBUG=False) Send a command to the photometer and wait an allotted timeout for a response (the timeout should be long for low light measurements)

class psychopy.hardware.pr.PR655(port) An interface to the PR655/PR670 via the serial port.

example usage:
from psychopy.hardware.pr import PR655
myPR655 = PR655(port)
myPR655.getLum()  # make a measurement
nm, power = myPR655.getLastSpectrum()  # get a power spectrum for the last measurement
NB psychopy.hardware.findPhotometer() will locate and return any supported device for you so you can also do:
from psychopy import hardware
phot = hardware.findPhotometer()
print phot.getLum()
Troubleshooting If the device isn't responding try turning it off and on again, and/or disconnecting/reconnecting the USB cable. It may be that the port has become controlled by some other program.

endRemoteMode() Puts the colorimeter back into normal mode

getDeviceSN() Return the device serial number

getDeviceType() Return the device type (e.g. PR-655 or PR-670)

getLastColorTemp() Fetches (from the device) the color temperature (K) of the last measurement Returns list: status, units, exponent, correlated color temp (Kelvins), CIE 1960 deviation See also measure() automatically populates pr655.lastColorTemp with the color temp in Kelvins

getLastSpectrum(parse=True) This retrieves the spectrum from the last call to measure() If parse=True (default): The format is a num array with 100 rows [nm, power] otherwise:
The output will be the raw string from the PR650 and should then be passed to parseSpectrumOutput(). It's more efficient to parse R,G,B strings at once than each individually.

getLastTristim() Fetches (from the device) the last CIE 1931 Tristimulus values Returns list: status, units, Tristimulus Values See also measure() automatically populates pr655.lastTristim with just the tristimulus coordinates

getLastUV() Fetches (from the device) the last CIE 1976 u,v coords Returns list: status, units, Photometric brightness, u, v See also measure() automatically populates pr655.lastUV with [u,v]

getLastXY() Fetches (from the device) the last CIE 1931 x,y coords Returns list: status, units, Photometric brightness, x, y See also measure() automatically populates pr655.lastXY with [x,y]

measure(timeOut=30.0) Make a measurement with the device. This automatically populates: .lastLum .lastSpectrum .lastCIExy .lastCIEuv

parseSpectrumOutput(rawStr) Parses the strings from the PR650 as received after sending the command D5. The input argument rawStr can be the output from a single phosphor spectrum measurement or a list of 3 such measurements [rawR, rawG, rawB].

sendMessage(message, timeout=0.5, DEBUG=False) Send a command to the photometer and wait an allotted timeout for a response (the timeout should be long for low light measurements)

startRemoteMode() Sets the Colorimeter into remote mode
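A sketch of how these methods combine (whether startRemoteMode() needs to be called explicitly before measure() may depend on your library version; as in the examples above, port is your serial port name):

from psychopy.hardware.pr import PR655

myPR655 = PR655(port)
myPR655.startRemoteMode()
myPR655.measure()                 # populates .lastLum, .lastSpectrum, .lastCIExy, .lastCIEuv
print myPR655.getLastXY()         # status, units, photometric brightness, x, y
print myPR655.getLastColorTemp()  # includes the correlated colour temperature in K
myPR655.endRemoteMode()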
13.7.9 pylink

You do need to install the Display Software (which they also call the EyeLink Developers Kit) for your particular platform. This can be found by following the threads from: https://www.sr-support.com/forums/forumdisplay.php?f=17

for pylink documentation see:
https://www.sr-support.com/forums/showthread.php?t=14

Performing research with eye-tracking equipment typically requires a long-term investment in software tools to collect, process, and analyze data. Much of this involves real-time data collection, saccadic analysis, calibration routines, and so on. The EyeLink eye-tracking system is designed to implement most of the required software base for data collection and conversion. It is most powerful when used with the Ethernet link interface, which allows remote control of data collection and real-time data transfer. The PyLink toolkit includes the Pylink module, which implements all core EyeLink functions and classes for the EyeLink connection and the eyelink graphics, such as the display of the camera image, calibration, validation, and drift correction. The EyeLink graphics is currently implemented using Simple DirectMedia Layer (SDL: www.libsdl.org).

The Pylink library contains a set of classes and functions, which are used to program experiments on many different platforms, such as MS-DOS, Windows, Linux, and the Macintosh. Some programming standards, such as placement of messages in the EDF file by your experiment, and the use of special data types, have been implemented to allow portability of the development kit across platforms. The standard messages allow general analysis tools such as the EDF2ASC converter or EyeLink Data Viewer to process your EDF files.

psychopy.hardware.findPhotometer(ports=None, device=None) Try to find a connected photometer/photospectrometer! PsychoPy will sweep a series of serial ports trying to open them. If a port successfully opens then it will try to issue a command to the device. If it responds with one of the expected values then it is assumed to be the appropriate device.

Parameters ports [a list of ports to search] Each port can be a string (e.g. 'COM1', '/dev/tty.Keyspan1.1') or a number (for win32 comports only). If none are provided then PsychoPy will sweep COM0-10 on win32 and search known likely port names on OS X and Linux.

device [string giving expected device (e.g. 'PR650', 'PR655', 'LS110')] If this is not given then an attempt will be made to find a device of any type, but this often fails.

Returns An object representing the first photometer found None if the ports didn't yield a valid response -1 if there were not even any valid ports (suggesting a driver not being installed)

e.g.:
photom = findPhotometer(device='PR655')  # sweeps ports 0 to 10 searching for a PR655
print photom.getLum()
if hasattr(photom, 'getSpectrum'):  # can retrieve spectrum (e.g. a PR650)
    print photom.getSpectrum()
Author 2010 written by Jeremy Gray, with input from Jon Peirce and Alex Holcombe

Parameters win [None, False, Window instance] what window to use for refresh rate testing (if any) and settings. None -> temporary window using defaults; False -> no window created, used, nor profiled; a Window() instance you have already created

author [None, string] None = try to autodetect the first __author__ in sys.argv[0]; string = user-supplied author info (of an experiment)

version [None, string] None = try to autodetect the first __version__ in sys.argv[0]; string = user-supplied version info (of an experiment)

verbose : False, True; how much detail to assess

refreshTest [None, False, True, 'grating'] True or 'grating' = assess refresh average, median, and SD of 60 win.flip()s, using visual.getMsPerFrame(); 'grating' = show a visual during the assessment; True = assess without a visual

userProcsDetailed: False, True get details about concurrent user processes (command, process-ID)

randomSeed: None a way for the user to record, and optionally set, a random seed for making reproducible random sequences. 'set:XYZ' will both record the seed, XYZ, and set it: random.seed(XYZ); numpy.random.seed() is NOT set. None defaults to the python default; 'time' = use time.time() as the seed, as obtained during RunTimeInfo(); randomSeed='set:time' will give a new random sequence every time the script is run, with the seed recorded.

Returns a flat dict (but with several groups based on key names):

psychopy [version, rush() availability] psychopyVersion, psychopyHaveExtRush, git branch and current commit hash if available

experiment [author, version, directory, name, current time-stamp, ] SHA1 digest, VCS info (if any, svn or hg only), experimentAuthor, experimentVersion, ...

system [hostname, platform, user login, count of users, user process info (count, cmd + pid), flagged processes] systemHostname, systemPlatform, ...

window [(see output; many details about the refresh rate, window, and monitor; units are noted)] windowWinType, windowWaitBlanking, ... windowRefreshTimeSD_ms, ... windowMonitor.<details>, ...

python [version of python, versions of key packages (wx, numpy, scipy, matplotlib, pyglet, pygame)] pythonVersion, pythonScipyVersion, ...

openGL [version, vendor, rendering engine, plus info on whether several extensions are present] openGLVersion, ..., openGLextGL_EXT_framebuffer_object, ...
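A brief sketch of gathering this information at the start of a script (the full class signature is not shown in this excerpt; the class is assumed to live in psychopy.info, and the two dict keys printed below are among those documented above):

from psychopy import info, visual

win = visual.Window(size=(800, 600))
runInfo = info.RunTimeInfo(win=win, refreshTest='grating', verbose=True)
print runInfo['psychopyVersion']
print runInfo['windowRefreshTimeSD_ms']  # how stable the refresh timing was
win.close()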
When setting the level for a particular log target (e.g. LogFile) the user can set the minimum level that is required for messages to enter the log. For example, setting a level of INFO will result in INFO, EXP, DATA, WARNING and ERROR messages being recorded but not DEBUG messages. By default, PsychoPy will record messages of WARNING level and above to the console. The user can silence that by setting it to receive only CRITICAL messages (which PsychoPy doesn't use) using the commands:
from psychopy import log
log.console.setLevel(log.CRITICAL)
class psychopy.log.LogFile(f=None, level=30, filemode='a', logger=None, encoding='utf8') A text stream to receive inputs from the logging system

Create a log file as a target for logged entries of a given level

Parameters f: this could be a string to a path, that will be created if it doesn't exist. Alternatively this could be a file object, sys.stdout or any object that supports .write() and .flush() methods

level: The minimum level of importance that a message must have to be logged by this target.

mode: 'a', 'w' Append or overwrite an existing log file

setLevel(level) Set a new minimal level for the log file/stream

write(txt) Write directly to the log file (without using logging functions). Useful to send messages that only this file receives

psychopy.log.addLevel(level, levelName) Associate levelName with level. This is used when converting levels to text during message formatting.

psychopy.log.critical(msg, t=None, obj=None) log.critical(message) Send the message to any receiver of logging info (e.g. a LogFile) of level log.CRITICAL or higher

psychopy.log.data(msg, t=None, obj=None) Log a message about data collection (e.g. a key press) usage:: log.data(message) Sends the message to any receiver of logging info (e.g. a LogFile) of level log.DATA or higher

psychopy.log.debug(msg, t=None, obj=None) Log a debugging message (not likely to be wanted once the experiment is finalised) usage:: log.debug(message) Sends the message to any receiver of logging info (e.g. a LogFile) of level log.DEBUG or higher

psychopy.log.error(msg, t=None, obj=None) log.error(message) Send the message to any receiver of logging info (e.g. a LogFile) of level log.ERROR or higher

psychopy.log.exp(msg, t=None, obj=None) Log a message about the experiment (e.g. a new trial, or end of a stimulus) usage:: log.exp(message) Sends the message to any receiver of logging info (e.g. a LogFile) of level log.EXP or higher
psychopy.log.fatal(msg, t=None, obj=None) log.critical(message) Send the message to any receiver of logging info (e.g. a LogFile) of level log.CRITICAL or higher

psychopy.log.flush(logger=<psychopy.log._Logger instance at 0x1b65940>) Send current messages in the log to all targets

psychopy.log.getLevel(level) Return the textual representation of logging level level. If the level is one of the predefined levels (CRITICAL, ERROR, WARNING, INFO, DEBUG) then you get the corresponding string. If you have associated levels with names using addLevelName then the name you have associated with level is returned. If a numeric value corresponding to one of the defined levels is passed in, the corresponding string representation is returned. Otherwise, the string 'Level %s' % level is returned.

psychopy.log.info(msg, t=None, obj=None) Log some information - maybe useful, maybe not usage:: log.info(message) Sends the message to any receiver of logging info (e.g. a LogFile) of level log.INFO or higher

psychopy.log.log(msg, level, t=None, obj=None) Log a message usage:: log(level, msg, t=t, obj=obj) Log the msg, at a given level on the root logger

psychopy.log.setDefaultClock(clock) Set the default clock to be used to reference all logging times. Must be a psychopy.core.Clock object. Beware that if you reset the clock during the experiment then the resets will be reflected here. That might be useful if you want your logs to be reset on each trial, but probably not.

psychopy.log.warn(msg, t=None, obj=None) log.warning(message) Sends the message to any receiver of logging info (e.g. a LogFile) of level log.WARNING or higher

psychopy.log.warning(msg, t=None, obj=None) log.warning(message) Sends the message to any receiver of logging info (e.g. a LogFile) of level log.WARNING or higher
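For instance, a minimal sketch of sending experiment events to a dedicated log file (the filename is arbitrary; only calls documented above are used):

from psychopy import log

logFile = log.LogFile('myExperiment.log', level=log.EXP, filemode='w')
log.exp('trial 1 started')       # recorded (EXP meets the file's level)
log.data('key press: left')      # recorded (DATA is above EXP)
log.debug('frame dropped?')      # below the file's level, so not recorded
log.flush()                      # push any pending messages to all targets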
13.9.1 flush()
psychopy.log.flush(logger=<psychopy.log._Logger instance at 0x1b65940>) Send current messages in the log to all targets
13.9.2 setDefaultClock()
psychopy.log.setDefaultClock(clock) Set the default clock to be used to reference all logging times. Must be a psychopy.core.Clock object. Beware that if you reset the clock during the experiment then the resets will be reflected here. That might be useful if you want your logs to be reset on each trial, but probably not.
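A small sketch of tying log timestamps to your own clock, using only the calls documented above:

from psychopy import core, log

trialClock = core.Clock()
log.setDefaultClock(trialClock)   # all subsequent log entries use trialClock's time
log.exp('stimulus onset')         # logged with a time referenced to trialClock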
psychopy.misc.float_uint16(inarray) Converts arrays, lists, tuples and floats ranging -1:1 into an array of Uint16s ranging 0:2^16

psychopy.misc.float_uint8(inarray) Converts arrays, lists, tuples and floats ranging -1:1 into an array of Uint8s ranging 0:255
>>> float_uint8(-1)
0
>>> float_uint8(0)
128
psychopy.misc.fromFile(filename) Load data (of any sort) from a pickle file; a simple wrapper of the cPickle module in core python

psychopy.misc.image2array(im) Takes an image object (PIL) and returns an array fredrik lundh, october 1998 [email protected] http://www.pythonware.com

psychopy.misc.makeImageAuto(inarray) Combines the float_uint8 and image2array operations, i.e. scales a numeric array from -1:1 to 0:255 and converts to PIL image format

psychopy.misc.mergeFolder(src, dst, pattern=None) Merge a folder into another.
Existing files in dst with the same name will be overwritten. Non-existent files/folders will be created.

psychopy.misc.pix2cm(pixels, monitor) Convert size in pixels to size in cm for a given Monitor object

psychopy.misc.pix2deg(pixels, monitor) Convert size in pixels to size in degrees for a given Monitor object

psychopy.misc.plotFrameIntervals(intervals) Plot a histogram of the frame intervals. Arguments: intervals: Either a filename of a log file, saved by Window.saveFrameIntervals, or simply a list (or array) of frame intervals

psychopy.misc.pol2cart(theta, radius, units='deg') Convert from polar to cartesian coordinates usage: x, y = pol2cart(theta, radius, units='deg')

psychopy.misc.radians(degrees) Convert degrees to radians
>>> radians(180)
3.1415926535897931
>>> radians(45)
0.78539816339744828
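To illustrate the unit-conversion helpers above, a short sketch (it assumes a monitor calibration named 'testMonitor' exists, as it does in a default installation):

from psychopy import misc, monitors

mon = monitors.Monitor('testMonitor')
print misc.pix2cm(100, mon)    # 100 pixels expressed in cm on this monitor
print misc.pix2deg(100, mon)   # ...and in degrees of visual angle
x, y = misc.pol2cart(theta=45, radius=1.0, units='deg')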
psychopy.misc.ratioRange(start, nSteps=None, stop=None, stepRatio=None, stepdB=None, stepLogUnits=None) Creates an array where each step is a constant ratio rather than a constant addition. Specify start and any 2 of: nSteps, stop, stepRatio, stepdB, stepLogUnits
>>> ratioRange(1,nSteps=4,stop=8)
array([ 1., 2., 4., 8.])
>>> ratioRange(1,nSteps=4,stepRatio=2)
array([ 1., 2., 4., 8.])
>>> ratioRange(1,stop=8,stepRatio=2)
array([ 1., 2., 4., 8.])
psychopy.misc.shuffleArray(inArray, shuffleAxis=-1, seed=None) Takes a (flat) num array, list or string and returns a shuffled version as a num array with the same shape. Optional argument shuffleAxis determines the axis to shuffle along (default=-1, meaning shuffle across the entire matrix?) THIS DOESN'T WORK WITH MATRICES YET - ONLY FLAT ARRAYS - APPEARS TO BE A BUG IN EITHER NUMPY.ARGSORT() OR NUMPY.TAKE()

psychopy.misc.toFile(filename, data) Save data (of any sort) as a pickle file; a simple wrapper of the cPickle module in core python

psychopy.misc.uint8_float(inarray) Converts arrays, lists, tuples and UINTs ranging 0:255 into an array of floats ranging -1:1
override. eg:
from psychopy import visual, monitors
mon = monitors.Monitor('SonyG55')  # fetch the most recent calib for this monitor
mon.setDistance(114)  # further away than normal?
win = visual.Window(size=[1024, 768], monitor=mon)
You might also want to fetch the Photometer class for conducting your own calibrations
13.11.1 Monitor
class psychopy.monitors.Monitor(name, width=None, distance=None, gamma=None, notes=None, useBits=None, verbose=True, currentCalib={}) Creates a monitor object for storing calibration details. This will be loaded automatically from disk if the monitor name is already defined (see methods). Many settings from the stored monitor can easily be overridden by adding them as arguments during the initial call.

arguments: width, distance, gamma are details about the calibration notes is a text field to store any useful info useBits True, False, None verbose True, False, None currentCalib is a dict object containing various fields for a calibration. Use with caution since the dict may not contain all the necessary fields that a monitor object expects to find.

eg: myMon = Monitor('sony500', distance=114) Fetches the info on the sony500 and overrides its usual distance to be 114cm for this experiment. myMon = Monitor('sony500') followed by... myMon['distance']=114 ...does the same! For both methods, if you then save, any modifications will be saved as well.

copyCalib(calibName=None) Stores the settings for the current calibration settings as a new monitor.

delCalib(calibName) Remove a specific calibration from the current monitor. Won't be finalised unless the monitor is saved

gammaIsDefault()

getCalibDate() As a python date object (convert to string using calibTools.strFromDate)

getDKL_RGB(RECOMPUTE=False) Returns the DKL->RGB conversion matrix. If one has been saved this will be returned. Otherwise, if power spectra are available for the monitor, a matrix will be calculated.

getDistance() Returns distance from viewer to the screen in cm, or None if not known

getGamma()
getGammaGrid() Gets the min,max,gamma values for each gun

getLMS_RGB(RECOMPUTE=False) Returns the LMS->RGB conversion matrix. If one has been saved this will be returned. Otherwise (if power spectra are available for the monitor) a matrix will be calculated.

getLevelsPost() Gets the measured luminance values from the last calibration TEST

getLevelsPre() Gets the measured luminance values from the last calibration

getLineariseMethod() Gets the method used for linearising the monitor (see setLineariseMethod)

getLumsPost() Gets the measured luminance values from the last calibration TEST

getLumsPre() Gets the measured luminance values from the last calibration

getMeanLum()

getNotes() Notes about the calibration

getPsychopyVersion()

getSizePix() Returns the size of the current calibration in pixels, or None if not defined

getSpectra() Gets the wavelength values from the last spectrometer measurement (if available) usage: nm, power = monitor.getSpectra()

getUseBits() Was this calibration carried out with a Bits++ box?

getWidth() Of the viewable screen in cm, or None if not known

lineariseLums(desiredLums, newInterpolators=False, overrideGamma=None) lums should be uncalibrated luminance values (e.g. a linear ramp) ranging 0:1

newCalib(calibName=None, width=None, distance=None, gamma=None, notes=None, useBits=False, verbose=True) Create a new (empty) calibration for this monitor and make this the current calibration

saveMon() Saves the current dict of calibs to disk

setCalibDate(date=None) Sets the calibration to a given date/time or to the current date/time if none given. (Also returns the date as set)

setCurrent(calibration=-1) Sets the current calibration for this monitor. Note that a single file can hold multiple calibrations, each stored under a different key (the date it was taken). The argument is either a string (naming the calib) or an integer eg:
myMon.setCurrent('mainCalib') fetches the calibration named mainCalib calibName = myMon.setCurrent(0) fetches the first calibration (alphabetically) for this monitor calibName = myMon.setCurrent(-1) fetches the last alphabetical calib for this monitor (this is the default) If default names are used for calibs (i.e. date/time stamp) then this will import the most recent.

setDKL_RGB(dkl_rgb) Sets the DKL->RGB conversion matrix for a chromatically calibrated monitor (matrix is a 3x3 num array).

setDistance(distance) To the screen (cm)

setGamma(gamma) Sets the gamma value(s) for the monitor. This only uses a single gamma value for the three guns, which is fairly approximate. Better to use setGammaGrid (which uses one gamma value for each gun)

setGammaGrid(gammaGrid) Sets the min,max,gamma values for each gun

setLMS_RGB(lms_rgb) Sets the LMS->RGB conversion matrix for a chromatically calibrated monitor (matrix is a 3x3 num array).

setLevelsPost(levels) Sets the last set of luminance values measured AFTER calibration

setLevelsPre(levels) Sets the last set of luminance values measured during calibration

setLineariseMethod(method) Sets the method for linearising: 0 uses y = a + (b*x)^gamma; 1 uses y = (a + b*x)^gamma; 2 uses linear interpolation over the curve

setLumsPost(lums) Sets the last set of luminance values measured AFTER calibration

setLumsPre(lums) Sets the last set of luminance values measured during calibration

setMeanLum(meanLum) Records the mean luminance (for reference only)

setNotes(notes) For you to store notes about the calibration

setPsychopyVersion(version)

setSizePix(pixels)

setSpectra(nm, rgb) Sets the phosphor spectra measured by the spectrometer

setUseBits(usebits)

setWidth(width) Of the viewable screen (cm)
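A short sketch of creating and saving a calibration by hand, using only the methods documented above (the monitor name and the measured values are arbitrary examples):

from psychopy import monitors

mon = monitors.Monitor('labScreen', width=40.0, distance=57.0)  # cm
mon.setSizePix([1024, 768])
mon.setGamma(2.2)                     # a single gamma for all three guns
mon.setNotes('rough calibration, no photometer used')
mon.saveMon()                         # write the calibration to disk
print mon.getGamma(), mon.getDistance()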
13.11.2 GammaCalculator
class psychopy.monitors.GammaCalculator(inputs=[], lums=[], gamma=None, bitsIN=8, bitsOUT=8, eq=1) Class for managing gamma tables

Parameters: inputs (required) = values at which you measured screen luminance, either in the range 0.0:1.0 or the range 0:255. Should include the min and max of the monitor. Then give EITHER lums or gamma: lums = measured luminance at the given input levels gamma = your own gamma value (single float) bitsIN = number of values in your lookup table bitsOUT = number of bits in the DACs

myTable.gammaModel myTable.gamma

fitGammaErrFun(params, x, y, minLum, maxLum) Provides an error function for fitting the gamma function (used by fitGammaFun)

fitGammaFun(x, y) Fits a gamma function to the monitor calibration data. Parameters: -xVals are the monitor look-up-table vals (either 0-255 or 0.0-1.0) -yVals are the measured luminances from a photometer/spectrometer
13.11.3 getAllMonitors()
psychopy.monitors.getAllMonitors() Find the names of all monitors for which calibration files exist
13.11.4 findPR650()
psychopy.monitors.findPR650(ports=None) DEPRECATED (as of v.1.60.01). Use psychopy.hardware.findPhotometer() instead, which finds a wider range of devices
13.11.5 getLumSeriesPR650()
psychopy.monitors.getLumSeriesPR650(lumLevels=8, winSize=(800, 600), monitor=None, gamma=1.0, allGuns=True, useBits=False, autoMode='auto', stimSize=0.3, photometer='COM1') DEPRECATED (since v1.60.01): Use psychopy.monitors.getLumSeries() instead
13.11.6 getRGBspectra()
psychopy.monitors.getRGBspectra(stimSize=0.3, winSize=(800, 600), photometer='COM1') usage: getRGBspectra(stimSize=0.3, winSize=(800,600), photometer='COM1') Params photometer could be a photometer object or a serial port name on which a photometer might be found (not recommended)
13.11.7 gammaFun()
psychopy.monitors.gammaFun(xx, minLum, maxLum, gamma, eq=1, a=None, b=None, k=None) Returns gamma-transformed luminance values. y = gammaFun(x, minLum, maxLum, gamma) a and b are calculated directly from minLum, maxLum, gamma Parameters: xx are the input values (range 0-255 or 0.0-1.0) minLum = the minimum luminance of your monitor maxLum = the maximum luminance of your monitor (for this gun) gamma = the value of gamma (for this gun)
13.11.8 gammaInvFun()
psychopy.monitors.gammaInvFun(yy, minLum, maxLum, gamma, b=None, eq=1) Returns inverse gamma function for desired luminance values. x = gammaInvFun(y, minLum, maxLum, gamma) a and b are calculated directly from minLum, maxLum, gamma Parameters: xx are the input values (range 0-255 or 0.0-1.0) minLum = the minimum luminance of your monitor maxLum = the maximum luminance of your monitor (for this gun) gamma = the value of gamma (for this gun) eq determines the gamma equation used; eq==1[default]: yy = a + (b*xx)**gamma eq==2: yy = (a + b*xx)**gamma
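To make the forward relationship concrete, a minimal sketch (the luminance limits and gamma below are made-up example numbers, not measured values):

import numpy
from psychopy import monitors

xx = numpy.linspace(0.0, 1.0, 5)                                # normalised LUT entries
yy = monitors.gammaFun(xx, minLum=0.5, maxLum=80.0, gamma=2.2)  # predicted luminances
print yy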
13.11.9 makeDKL2RGB()
psychopy.monitors.makeDKL2RGB(nm, powerRGB) creates a 3x3 DKL->RGB conversion matrix from the spectral input powers
13.11.10 makeLMS2RGB()
psychopy.monitors.makeLMS2RGB(nm, powerRGB) Creates a 3x3 LMS->RGB conversion matrix from the spectral input powers
psychopy.parallel.setPin(pinNumber, state) Set a desired pin to be high(1) or low(0). Only pins 2-9 (incl) are normally used for data output:
parallel.setPin(3, 1)  # sets pin 3 high
parallel.setPin(3, 0)  # sets pin 3 low
psychopy.parallel.setPortAddress(address=888) Set the memory address of your parallel port, to be used in subsequent commands common port addresses:
LPT1 = 0x0378 or 0x03BC
LPT2 = 0x0278 or 0x0378
LPT3 = 0x0278
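For example, a brief TTL pulse on one of the data pins might be sent like this (a sketch only; 0x0378 is assumed to be your port's address and the ~5 ms pulse width is arbitrary):

from psychopy import core, parallel

parallel.setPortAddress(0x0378)  # LPT1 on many systems; check yours
parallel.setPin(2, 1)            # set pin 2 high
core.wait(0.005)                 # hold the pulse for ~5 ms
parallel.setPin(2, 0)            # and back low again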
import serial
ser = serial.Serial(0, 19200, timeout=1)  # open first serial port
#ser = serial.Serial('/dev/ttyS1', 19200, timeout=1)  # or something like this for mac/linux machines
ser.write('someCommand')
line = ser.readline()  # read a '\n' terminated line
ser.close()
Ports are fully configurable with all the options you would expect of RS232 communications. See http://pyserial.sourceforge.net for further details and documentation.
pyserial is packaged in the Standalone distributions (win and mac); for manual installations you should install this yourself.
play(fromStart=True) Starts playing the sound on an available channel. If no sound channels are available, it will not play and will return None. This runs off a separate thread, i.e. your code won't wait for the sound to finish before continuing. You need to use a psychopy.core.wait() command if you want things to pause. If you call play() while something is already playing the sounds will be played over each other.

setVolume(newVol) Sets the current volume of the sound (0.0:1.0)

stop() Stops the sound immediately

For those that prefer Epydoc formatted API, that is also available here
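A small usage sketch based on the methods above (the constructor arguments shown here - a frequency in Hz and a duration in seconds - are an assumption, since the Sound constructor is not documented in this excerpt):

from psychopy import core, sound

tone = sound.Sound(440, secs=0.5)  # assumed constructor: a 440 Hz tone lasting 0.5 s
tone.setVolume(0.8)
tone.play()                        # returns immediately (runs on a separate thread)
core.wait(0.5)                     # so wait for the sound to finish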
CHAPTER
FOURTEEN
FOR DEVELOPERS
There is a separate mailing list to discuss development ideas and issues. For developers the best way to use PsychoPy is to install a version to your own copy of python (preferably 2.6 but 2.5 is OK). Make sure you have all the Dependencies, including the extra recommended packages for developers. Don't install PsychoPy. Instead fetch a copy of the git repository and add this to the python path using a .pth file. Other users of the computer might have their own standalone versions installed without your repository version touching them.
14.1.1 Workflow
The use of git and the following workflow allows people to contribute changes that can easily be incorporated back into the project, while (hopefully) maintaining order and consistency in the code. All changes should be tracked and reversible.

Create a fork of the central psychopy/psychopy repository

Create a local clone of that fork

For small changes:
    make the changes directly in the master branch
    push back to your fork
    submit a pull request to the central repository

For substantial changes (new features):
    create a branch
    when finished run unit tests
    when the unit tests pass merge changes back into the master branch
    submit a pull request to the central repository
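As a concrete sketch of the branch-based route (these are standard git commands; the branch name is just an example):

$ git checkout -b feature-somethingNew   # create and switch to a new branch
... edit files, run the unit tests, commit as you go ...
$ git checkout master
$ git merge feature-somethingNew         # fold the finished work back into master
$ git push                               # push master back up to your fork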
The last line connects your copy (with read access) to the central server so you can easily fetch any updates to the central repository.
From the GUI you can select (or 'stage' in git terminology) the files that you want to include in this particular commit and give it a message. Give a clear summary of the changes for the first line. You can add more details about the changes on lower lines if needed. If you have internet access then you could also push your changes back up to your fork (which is called your 'origin' by default), either by pressing the push button in the GUI or by closing that and typing:
$ git push
You can push your new branch back to your fork (origin) with:
$ git push origin feature-somethingNew
Once you've folded your new code back into your master and pushed it back to your github fork then it's time to Share your improvement with others.
and at the end of the experiment. In addition, you may need to sacrifice some complexity in order to keep things streamlined enough for a Builder. Your new Component class (in newcomp.py) will probably inherit from either BaseComponent (_base.py) or VisualComponent (_visual.py). You may need to rewrite some or all of these methods, to override the default behavior:
class NewcompComponent(BaseComponent):  # or (VisualComponent)
    def __init__(<lots of stuff>):
    def writeInitCode(self, buff):
    def writeRoutineStartCode(self, buff):
    def writeFrameCode(self, buff):
    def writeRoutineEndCode(self, buff):
Note that while writing a new Component, if there's a syntax error in newcomp.py, the whole app (psychopy) is likely to fail to start (because Components are auto-detected and loaded). In addition, you may need to edit settings.py, which writes out the set-up code for the whole experiment (e.g., to define the window). For example, this was necessary for ApertureComponent, to pass allowStencil=True to the window creation. Your new Component writes code into a buffer that becomes an executable python file, xxx_lastrun.py (where xxx is whatever the experimenter specifies when saving from the builder, xxx.psyexp). You will do a bunch of this kind of call in your newcomp.py file:
buff.writeIndented(your_python_syntax_string_here)
You have to manage the indentation level of the output code, see experiment.IndentingBuffer(). xxx_lastrun.py is the file that gets built when you run xxx.psyexp from the builder. So you will want to look at xxx_lastrun.py frequently when developing your component. There are several internal variables (er, names of python objects) that have a specific, hardcoded meaning within xxx_lastrun.py. You can expect the following to be there, and they should only be used in the original way (or something will break for the end-user, likely in a mysterious way):
win = the window
t = time within the trial loop, referenced to trialClock
x, y = mouse coordinates, but only if the experimenter uses a mouse component
Handling of variable names is under active development, so this list may well be out of date. (If so, you might consider updating it or posting a note to psychopy-dev.) Preliminary testing suggests that there are 600-ish names from numpy or numpy.random, plus the following:
[KeyResponse, __builtins__, __doc__, __file__, __name__, __package__, buttons, core,
Yet other names get derived from user-entered names, like trials -> thisTrial.

self.params is a key construct that you build up in __init__. You need name, startTime, duration, and several other params to be defined or you get errors. 'name' should be of type 'code'. To indicate that a param should be considered as an advanced feature, add it to the list self.params['advancedParams']. The GUI shown to the experimenter will then initially hide it as an option. Nice, easy.

During development, I found it helpful at times to save the params into the xxx_lastrun.py file as comments, so I could see what was happening:
def writeInitCode(self, buff):
    # for debugging during Component development:
    buff.writeIndented("# self.params for aperture:\n")
    for p in self.params.keys():
        try:
            buff.writeIndented("# %s: %s <type %s>\n" % (p, self.params[p].val, self.params[p].valType))
        except:
            pass
A lot more detail can be inferred from Jon's code. Making things loop-compatible looks interesting - see keyboard.py for an example, especially the code for saving data at the end.

Gotchas: param[].val : If you have a boolean variable (e.g., my_flag) as one of your params, note that self.param['my_flag'] is always True (the param exists -> True). So you almost always want self.param['my_flag'].val.
CHAPTER
FIFTEEN
15.1 Parameters
Many of the nodes described within this xml description of the experiment contain Param entries, representing different parameters of that Component. All parameter nodes have a name property and a val property. Most also have a valType property, which can take values 'bool', 'code', 'str', and an updates property that specifies whether this parameter is changing during the experiment and, if so, whether it changes every frame (of the monitor) or every repeat (of the Routine).
15.2 Settings
The Settings node contains a number of parameters that, in PsychoPy, would normally be set in the Experiment settings dialog, such as the monitor to be used. This node contains a number of Parameters that map onto the entries in that dialog.
15.3 Routines
This node provides a sequence of xml child nodes, each of which describes a Routine. Each Routine contains a number of children, each specifying a Component, such as a stimulus or response collecting device. In the Builder view, the Routines obviously show up as different tabs in the main window and the Components show up as tracks within that tab.
15.4 Components
Each Component is represented in the .psyexp file as a set of parameters, corresponding to the entries in the appropriate component dialog box, that completely describe how and when the stimulus should be presented or how and when the input device should be read from. Different Components have slightly different nodes in the xml representation, which give rise to different sets of parameters. For instance the TextComponent nodes have parameters such as colour and font, whereas the KeyboardComponent node has parameters such as forceEndTrial and correctIf.
15.5 Flow
The Flow node is rather more simple. Its children simply specify objects that occur in a particular order in time. A Routine described in this flow must exist in the list of Routines, since this is where it is fully described. One Routine can occur once, more than once or not at all in the Flow. The other children that can occur in a Flow are LoopInitiators and LoopTerminators, which specify the start and endpoints of a loop. All loops must have exactly one initiator and one terminator.
15.6 Names
For the experiment to generate valid PsychoPy code the name parameters of all objects (Components, Loops and Routines) must be unique and contain no spaces. That is, an experiment cannot have two different Routines called 'trial', nor even a Routine called 'trial' and a Loop called 'trial'. The Parameter names belonging to each Component (or the Settings node) must be unique within that Component, but can be identical to parameters of other Components or can match the Component name themselves. A TextComponent should not, for example, have multiple 'pos' parameters, but other Components generally will, and a Routine called 'pos' would also be permissible.
<PsychoPy2experiment version="1.50.04" encoding="utf-8">
  <Settings>
    <Param name="Monitor" val="testMonitor" valType="str" updates="None"/>
    <Param name="Window size (pixels)" val="[1024, 768]" valType="code" updates="None"/>
    <Param name="Full-screen window" val="True" valType="bool" updates="None"/>
    <Param name="Save log file" val="True" valType="bool" updates="None"/>
    <Param name="Experiment info" val="{'participant':'s_001', 'session':'001'}" valType="code" updates="None"/>
    <Param name="Show info dlg" val="True" valType="bool" updates="None"/>
    <Param name="logging level" val="warning" valType="code" updates="None"/>
    <Param name="Units" val="norm" valType="str" updates="None"/>
    <Param name="Screen" val="1" valType="num" updates="None"/>
  </Settings>
  <Routines>
    <Routine name="trial">
      <TextComponent name="word">
        <Param name="name" val="word" valType="code" updates="constant"/>
        <Param name="text" val="thisTrial.text" valType="code" updates="set every repeat"/>
        <Param name="colour" val="thisTrial.rgb" valType="code" updates="set every repeat"/>
        <Param name="ori" val="0" valType="code" updates="constant"/>
        <Param name="pos" val="[0, 0]" valType="code" updates="constant"/>
        <Param name="times" val="[0.5,2.0]" valType="code" updates="constant"/>
        <Param name="letterHeight" val="0.2" valType="code" updates="constant"/>
        <Param name="colourSpace" val="rgb" valType="code" updates="constant"/>
        <Param name="units" val="window units" valType="str" updates="None"/>
        <Param name="font" val="Arial" valType="str" updates="constant"/>
      </TextComponent>
      <KeyboardComponent name="resp">
        <Param name="storeCorrect" val="True" valType="bool" updates="constant"/>
        <Param name="name" val="resp" valType="code" updates="None"/>
        <Param name="forceEndTrial" val="True" valType="bool" updates="constant"/>
        <Param name="times" val="[0.5,2.0]" valType="code" updates="constant"/>
        <Param name="allowedKeys" val="[1,2,3]" valType="code" updates="constant"/>
        <Param name="storeResponseTime" val="True" valType="bool" updates="constant"/>
        <Param name="correctIf" val="resp.keys==str(thisTrial.corrAns)" valType="code" updates="constant"/>
        <Param name="store" val="last key" valType="str" updates="constant"/>
      </KeyboardComponent>
    </Routine>
    <Routine name="instruct">
      <TextComponent name="instrText">
        <Param name="name" val="instrText" valType="code" updates="constant"/>
        <Param name="text" val=""Please press; 1 for red ink, 2 for green ink 3 for
        <Param name="colour" val="[1, 1, 1]" valType="code" updates="constant"/>
        <Param name="ori" val="0" valType="code" updates="constant"/>
        <Param name="pos" val="[0, 0]" valType="code" updates="constant"/>
        <Param name="times" val="[0, 10000]" valType="code" updates="constant"/>
        <Param name="letterHeight" val="0.1" valType="code" updates="constant"/>
        <Param name="colourSpace" val="rgb" valType="code" updates="constant"/>
        <Param name="units" val="window units" valType="str" updates="None"/>
        <Param name="font" val="Arial" valType="str" updates="constant"/>
      </TextComponent>
      <KeyboardComponent name="ready">
        <Param name="storeCorrect" val="False" valType="bool" updates="constant"/>
        <Param name="name" val="ready" valType="code" updates="None"/>
        <Param name="forceEndTrial" val="True" valType="bool" updates="constant"/>
        <Param name="times" val="[0, 10000]" valType="code" updates="constant"/>
        <Param name="allowedKeys" val="" valType="code" updates="constant"/>
        <Param name="storeResponseTime" val="False" valType="bool" updates="constant"/>
        <Param name="correctIf" val="resp.keys==str(thisTrial.corrAns)" valType="code" updates="constant"/>
        <Param name="store" val="last key" valType="str" updates="constant"/>
      </KeyboardComponent>
    </Routine>
    <Routine name="thanks">
      <TextComponent name="thanksText">
        <Param name="name" val="thanksText" valType="code" updates="constant"/>
        <Param name="text" val=""Thanks!"" valType="code" updates="constant"/>
        <Param name="colour" val="[1, 1, 1]" valType="code" updates="constant"/>
        <Param name="ori" val="0" valType="code" updates="constant"/>
        <Param name="pos" val="[0, 0]" valType="code" updates="constant"/>
        <Param name="times" val="[1.0, 2.0]" valType="code" updates="constant"/>
        <Param name="letterHeight" val="0.2" valType="code" updates="constant"/>
        <Param name="colourSpace" val="rgb" valType="code" updates="constant"/>
        <Param name="units" val="window units" valType="str" updates="None"/>
        <Param name="font" val="arial" valType="str" updates="constant"/>
      </TextComponent>
    </Routine>
  </Routines>
  <Flow>
    <Routine name="instruct"/>
    <LoopInitiator loopType="TrialHandler" name="trials">
      <Param name="endPoints" val="[0, 1]" valType="num" updates="None"/>
      <Param name="name" val="trials" valType="code" updates="None"/>
      <Param name="loopType" val="random" valType="str" updates="None"/>
      <Param name="nReps" val="5" valType="num" updates="None"/>
      <Param name="trialList" val="[{text: red, rgb: [1, -1, -1], congruent: 1, corrAns: 1}
      <Param name="trialListFile" val="/Users/jwp...troop/trialTypes.csv" valType="str" updates="None"/>
    </LoopInitiator>
    <Routine name="trial"/>
    <LoopTerminator name="trials"/>
    <Routine name="thanks"/>
  </Flow>
</PsychoPy2experiment>
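Because the .psyexp file is plain XML, it can also be inspected with standard tools outside PsychoPy. For example (a sketch only, not part of PsychoPy itself; 'stroop.psyexp' is a hypothetical filename), the Routine and Component names could be listed with the standard library:

from xml.etree import ElementTree

exp = ElementTree.parse('stroop.psyexp').getroot()
for routine in exp.find('Routines'):
    # each Routine element contains one child element per Component
    print routine.get('name'), [comp.get('name') for comp in routine]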
CHAPTER
SIXTEEN
GLOSSARY
Adaptive staircase An experimental method whereby the choice of stimulus parameters is not pre-determined but based on previous responses. For example, the difficulty of a task might be varied trial-to-trial based on the participant's responses. These are often used to find psychophysical thresholds. Contrast this with the method of constants.

CRT [Cathode Ray Tube] Traditional computer monitor (rather than an LCD or plasma flat screen).

csv [comma-separated value files] Type of basic text file with comma-separated values. This type of file can be opened with most spreadsheet packages (e.g. MS Excel) for easy reading and manipulation.

Method of constants An experimental method whereby the parameters controlling trials are predetermined at the beginning of the experiment, rather than determined on each trial. For example, a stimulus may be presented for 3 pre-determined time periods (100, 200, 300ms) on different trials, and then repeated a number of times. The order of presentation of the different conditions can be randomised or sequential (in a fixed order). Contrast this method with the adaptive staircase.

VBI [Vertical Blank Interval] (aka the Vertical Retrace, or Vertical Blank, VBL). The period in-between video frames, which can be used for synchronising purposes. On a CRT display the screen is black during the VBI and the display beam is returned to the top of the display.

VBI blocking The setting whereby all functions are synced to the VBI. After a call to psychopy.visual.Window.flip() nothing else occurs until the VBI has occurred. This is optimal and allows very precise timing, because as soon as the flip has occurred a very precise time interval is known to have occurred.

VBI syncing (aka vsync) The setting whereby the video drawing commands are synced to the VBI. When psychopy.visual.Window.flip() is called, the current back buffer (where drawing commands are being executed) will be held and drawn on the next VBI. This does not necessarily entail VBI blocking (because the system may return and continue executing commands) but does guarantee a fixed interval between frames being drawn.

xlsx [Excel OpenXML file format] A spreadsheet data format developed by Microsoft but with an open (published) format. This is the native file format for Excel (2007 or later) and can be opened by most modern spreadsheet applications including OpenOffice (3.0+), google docs, Apple iWork '08.
CHAPTER
SEVENTEEN
INDICES
A pdf copy of the current documentation is available at: http://www.psychopy.org/PsychoPyManual.pdf
p
psychopy.core, 65 psychopy.data, 93 psychopy.event, 105 psychopy.filters, 107 psychopy.hardware.crs, 113 psychopy.hardware.egi, 114 psychopy.hardware.forp, 114 psychopy.hardware.ioLabs, 114 psychopy.hardware.minolta, 115 psychopy.hardware.pr, 117 psychopy.info, 120 psychopy.log, 121 psychopy.misc, 124 psychopy.parallel, 131 psychopy.sound, 132 psychopy.visual, 65 pylink, 120 pyxid, 112
149
150
INDEX
A
Adaptive staircase, 145 addData() (psychopy.data.MultiStairHandler method), 98 addData() (psychopy.data.QuestHandler method), 101 addData() (psychopy.data.StairHandler method), 95 addData() (psychopy.data.TrialHandler method), 93 addField() (psychopy.gui.Dlg method), 110 addFixedField() (psychopy.gui.Dlg method), 110 addLevel() (in module psychopy.log), 122 addText() (psychopy.gui.Dlg method), 110 Aperture (class in psychopy.visual), 92 array2image() (in module psychopy.misc), 124
clearTextures() (psychopy.visual.RadialStim method), 82 clickReset() (psychopy.event.Mouse method), 105 Clock (class in psychopy.core), 65 close() (psychopy.visual.Window method), 67 cm2deg() (in module psychopy.misc), 124 cm2pix() (in module psychopy.misc), 124 confInterval() (psychopy.data.QuestHandler method), 101 conv2d() (in module psychopy.lters), 108 copyCalib() (psychopy.monitors.Monitor method), 126 critical() (in module psychopy.log), 122 CRT, 145 csv, 145
B
BitsBox (class in psychopy.hardware.crs), 113 bootStraps() (in module psychopy.data), 105 butter2d_bp() (in module psychopy.lters), 107 butter2d_hp() (in module psychopy.lters), 107 butter2d_lp() (in module psychopy.lters), 107 butter2d_lp_elliptic() (in module psychopy.lters), 108 ButtonBox (class in psychopy.hardware.forp), 114
D
data() (in module psychopy.log), 122 debug() (in module psychopy.log), 122 deg2cm() (in module psychopy.misc), 124 deg2pix() (in module psychopy.misc), 124 delCalib() (psychopy.monitors.Monitor method), 126 disable() (psychopy.visual.Aperture method), 93 Dlg (class in psychopy.gui), 110 DlgFromDict (class in psychopy.gui), 109 DotStim (class in psychopy.visual), 86 draw() (psychopy.visual.DotStim method), 88 draw() (psychopy.visual.ElementArrayStim method), 85 draw() (psychopy.visual.MovieStim method), 77 draw() (psychopy.visual.PatchStim method), 71 draw() (psychopy.visual.RadialStim method), 82 draw() (psychopy.visual.RatingScale method), 92 draw() (psychopy.visual.ShapeStim method), 80 draw() (psychopy.visual.SimpleImageStim method), 73 draw() (psychopy.visual.TextStim method), 74
C
calculateNextIntensity() (psychopy.data.QuestHandler method), 101 calculateNextIntensity() (psychopy.data.StairHandler method), 95 cart2pol() (in module psychopy.misc), 124 checkOK() (psychopy.hardware.minolta.LS100 method), 116 clear_response_queue() (pyxid.ResponseDevice method), 112 clearBuffer() (psychopy.hardware.forp.ButtonBox method), 114 clearBuffer() (psychopy.visual.Window method), 66 clearEvents() (in module psychopy.event), 106 clearMemory() (psychopy.hardware.minolta.LS100 method), 116 clearTextures() (psychopy.visual.ElementArrayStim method), 85 clearTextures() (psychopy.visual.PatchStim method), 71
E
ElementArrayStim (class in psychopy.visual), 84 enable() (psychopy.visual.Aperture method), 93 endRemoteMode() (psychopy.hardware.pr.PR655 method), 118 error() (in module psychopy.log), 122 eval() (psychopy.data.FitCumNormal method), 104 eval() (psychopy.data.FitLogistic method), 103 151
eval() (psychopy.data.FitNakaRushton method), 103 eval() (psychopy.data.FitWeibull method), 103 exp() (in module psychopy.log), 122 extendArr() (in module psychopy.misc), 124
F
fadeOut() (psychopy.sound.SoundPygame method), 132 fatal() (in module psychopy.log), 123 leOpenDlg (class in psychopy.gui), 110 leSaveDlg (class in psychopy.gui), 111 ndPhotometer() (in module psychopy.hardware), 120 ndPR650() (in module psychopy.monitors), 129 FitCumNormal (class in psychopy.data), 104 tGammaErrFun() (psychopy.monitors.GammaCalculator method), 129 tGammaFun() (psychopy.monitors.GammaCalculator method), 129 FitLogistic (class in psychopy.data), 103 FitNakaRushton (class in psychopy.data), 103 FitWeibull (class in psychopy.data), 103 ip() (psychopy.visual.Window method), 67 oat_uint16() (in module psychopy.misc), 124 oat_uint8() (in module psychopy.misc), 124 ush() (in module psychopy.log), 123 fps() (psychopy.visual.Window method), 67 fromFile() (in module psychopy.misc), 124 functionFromStaircase() (in module psychopy.data), 104
G
GammaCalculator (class in psychopy.monitors), 129 gammaFun() (in module psychopy.monitors), 130 gammaInvFun() (in module psychopy.monitors), 130 gammaIsDefault() (psychopy.monitors.Monitor method), 126 get_next_response() (pyxid.ResponseDevice method), 112 get_xid_device() (in module pyxid), 112 get_xid_devices() (in module pyxid), 112 getAllMonitors() (in module psychopy.monitors), 129 getCalibDate() (psychopy.monitors.Monitor method), 126 getDeviceSN() (psychopy.hardware.pr.PR655 method), 118 getDeviceType() (psychopy.hardware.pr.PR655 method), 118 getDistance() (psychopy.monitors.Monitor method), 126 getDKL_RGB() (psychopy.monitors.Monitor method), 126 getDuration() (psychopy.sound.SoundPygame method), 132 getEvents() (psychopy.hardware.forp.ButtonBox method), 114 getGamma() (psychopy.monitors.Monitor method), 126 152
getGammaGrid() (psychopy.monitors.Monitor method), 126 getKeys() (in module psychopy.event), 106 getLastColorTemp() (psychopy.hardware.pr.PR655 method), 118 getLastLum() (psychopy.hardware.pr.PR650 method), 117 getLastSpectrum() (psychopy.hardware.pr.PR650 method), 117 getLastSpectrum() (psychopy.hardware.pr.PR655 method), 118 getLastTristim() (psychopy.hardware.pr.PR655 method), 119 getLastUV() (psychopy.hardware.pr.PR655 method), 119 getLastXY() (psychopy.hardware.pr.PR655 method), 119 getLevel() (in module psychopy.log), 123 getLevelsPost() (psychopy.monitors.Monitor method), 127 getLevelsPre() (psychopy.monitors.Monitor method), 127 getLineariseMethod() (psychopy.monitors.Monitor method), 127 getLMS_RGB() (psychopy.monitors.Monitor method), 127 getLum() (psychopy.hardware.minolta.LS100 method), 116 getLum() (psychopy.hardware.pr.PR650 method), 117 getLumSeriesPR650() (in module psychopy.monitors), 129 getLumsPost() (psychopy.monitors.Monitor method), 127 getLumsPre() (psychopy.monitors.Monitor method), 127 getMeanLum() (psychopy.monitors.Monitor method), 127 getMovieFrame() (psychopy.visual.Window method), 67 getNotes() (psychopy.monitors.Monitor method), 127 getPos() (psychopy.event.Mouse method), 105 getPressed() (psychopy.event.Mouse method), 105 getPsychopyVersion() (psychopy.monitors.Monitor method), 127 getRating() (psychopy.visual.RatingScale method), 92 getRel() (psychopy.event.Mouse method), 106 getRGBspectra() (in module psychopy.monitors), 130 getRMScontrast() (in module psychopy.lters), 108 getRT() (psychopy.visual.RatingScale method), 92 getSizePix() (psychopy.monitors.Monitor method), 127 getSpectra() (psychopy.monitors.Monitor method), 127 getSpectrum() (psychopy.hardware.pr.PR650 method), 117 getTime() (psychopy.core.Clock method), 65 getUniqueEvents() (psychopy.hardware.forp.ButtonBox method), 114 getUseBits() (psychopy.monitors.Monitor method), 127 getVisible() (psychopy.event.Mouse method), 106 getVolume() (psychopy.sound.SoundPygame method), 132
I
image2array() (in module psychopy.misc), 124
imfft() (in module psychopy.filters), 108
imifft() (in module psychopy.filters), 108
importData() (psychopy.data.QuestHandler method), 101
importTrialList() (in module psychopy.data), 104
incTrials() (psychopy.data.QuestHandler method), 101
info() (in module psychopy.log), 123
init_device() (pyxid.XidDevice method), 113
inverse() (psychopy.data.FitCumNormal method), 104
inverse() (psychopy.data.FitLogistic method), 103
inverse() (psychopy.data.FitNakaRushton method), 103
inverse() (psychopy.data.FitWeibull method), 103
L
lineariseLums() (psychopy.monitors.Monitor method), 127
loadMovie() (psychopy.visual.MovieStim method), 77
log() (in module psychopy.log), 123
LogFile (class in psychopy.log), 122
logOnFlip() (psychopy.visual.Window method), 67
LS100 (class in psychopy.hardware.minolta), 115

M
makeDKL2RGB() (in module psychopy.monitors), 130
makeGauss() (in module psychopy.filters), 108
makeGrating() (in module psychopy.filters), 108
makeImageAuto() (in module psychopy.misc), 124
makeLMS2RGB() (in module psychopy.monitors), 130
makeMask() (in module psychopy.filters), 109
makeRadialMatrix() (in module psychopy.filters), 109
maskMatrix() (in module psychopy.filters), 109
mean() (psychopy.data.QuestHandler method), 101
measure() (psychopy.hardware.minolta.LS100 method), 116
measure() (psychopy.hardware.pr.PR650 method), 117
measure() (psychopy.hardware.pr.PR655 method), 119
mergeFolder() (in module psychopy.misc), 124
Method of constants, 145
mode() (psychopy.data.QuestHandler method), 101
Monitor (class in psychopy.monitors), 126
Mouse (class in psychopy.event), 105
mouseMoved() (psychopy.event.Mouse method), 106
MovieStim (class in psychopy.visual), 76
MultiStairHandler (class in psychopy.data), 97

N
newCalib() (psychopy.monitors.Monitor method), 127
next() (psychopy.data.MultiStairHandler method), 98
next() (psychopy.data.QuestHandler method), 101
next() (psychopy.data.StairHandler method), 96
next() (psychopy.data.TrialHandler method), 93
nextTrial() (psychopy.data.QuestHandler method), 101
nextTrial() (psychopy.data.StairHandler method), 96
nextTrial() (psychopy.data.TrialHandler method), 93

P
parseSpectrumOutput() (psychopy.hardware.pr.PR650 method), 118
parseSpectrumOutput() (psychopy.hardware.pr.PR655 method), 119
PatchStim (class in psychopy.visual), 69
pause() (psychopy.visual.MovieStim method), 77
pix2cm() (in module psychopy.misc), 125
pix2deg() (in module psychopy.misc), 125
play() (psychopy.sound.SoundPygame method), 132
play() (psychopy.visual.MovieStim method), 77
plotFrameIntervals() (in module psychopy.misc), 125
pol2cart() (in module psychopy.misc), 125
poll_for_response() (pyxid.ResponseDevice method), 112
PR650 (class in psychopy.hardware.pr), 117
PR655 (class in psychopy.hardware.pr), 118
printAsText() (psychopy.data.MultiStairHandler method), 98
printAsText() (psychopy.data.QuestHandler method), 101
printAsText() (psychopy.data.StairHandler method), 96
printAsText() (psychopy.data.TrialHandler method), 94
psychopy.core (module), 65
psychopy.data (module), 93
psychopy.event (module), 105
psychopy.filters (module), 107
psychopy.hardware.crs (module), 113
psychopy.hardware.egi (module), 114
psychopy.hardware.forp (module), 114
psychopy.hardware.ioLabs (module), 114
psychopy.hardware.minolta (module), 115
psychopy.hardware.pr (module), 117
psychopy.info (module), 120
psychopy.log (module), 121
psychopy.misc (module), 124
psychopy.parallel (module), 131
psychopy.sound (module), 132
psychopy.visual (module), 65
pylink (module), 120
pyxid (module), 112

Q
quantile() (psychopy.data.QuestHandler method), 102
QuestHandler (class in psychopy.data), 99
quit() (in module psychopy.core), 65

R
RadialStim (class in psychopy.visual), 81
radians() (in module psychopy.misc), 125
RatingScale (class in psychopy.visual), 90
ratioRange() (in module psychopy.misc), 125
readPin() (in module psychopy.parallel), 131
reset() (psychopy.core.Clock method), 65
reset() (psychopy.visual.RatingScale method), 92
response_queue_size() (pyxid.ResponseDevice method), 112
ResponseDevice (class in pyxid), 112
RunTimeInfo (class in psychopy.info), 120
S
saveAsExcel() (psychopy.data.MultiStairHandler method), 98
saveAsExcel() (psychopy.data.QuestHandler method), 102
saveAsExcel() (psychopy.data.StairHandler method), 96
saveAsExcel() (psychopy.data.TrialHandler method), 94
saveAsPickle() (psychopy.data.MultiStairHandler method), 99
saveAsPickle() (psychopy.data.QuestHandler method), 102
saveAsPickle() (psychopy.data.StairHandler method), 96
saveAsPickle() (psychopy.data.TrialHandler method), 94
saveAsText() (psychopy.data.MultiStairHandler method), 99
saveAsText() (psychopy.data.QuestHandler method), 102
saveAsText() (psychopy.data.StairHandler method), 97
saveAsText() (psychopy.data.TrialHandler method), 94
saveFrameIntervals() (psychopy.visual.Window method), 67
saveMon() (psychopy.monitors.Monitor method), 127
saveMovieFrames() (psychopy.visual.Window method), 67
sd() (psychopy.data.QuestHandler method), 102
seek() (psychopy.visual.MovieStim method), 77
sendMessage() (psychopy.hardware.minolta.LS100 method), 116
sendMessage() (psychopy.hardware.pr.PR650 method), 118
sendMessage() (psychopy.hardware.pr.PR655 method), 119
set() (psychopy.visual.DotStim method), 88
setAngularCycles() (psychopy.visual.RadialStim method), 82
setAngularPhase() (psychopy.visual.RadialStim method), 82
setAutoDraw() (psychopy.visual.DotStim method), 88
setAutoDraw() (psychopy.visual.MovieStim method), 77
setAutoDraw() (psychopy.visual.PatchStim method), 71
setAutoDraw() (psychopy.visual.RadialStim method), 82
setAutoDraw() (psychopy.visual.ShapeStim method), 80
setAutoDraw() (psychopy.visual.TextStim method), 74
setAutoLog() (psychopy.visual.DotStim method), 88
setAutoLog() (psychopy.visual.MovieStim method), 77
setAutoLog() (psychopy.visual.PatchStim method), 71
setAutoLog() (psychopy.visual.RadialStim method), 82
setAutoLog() (psychopy.visual.ShapeStim method), 80
setAutoLog() (psychopy.visual.TextStim method), 74
setCalibDate() (psychopy.monitors.Monitor method), 127
setColor() (psychopy.visual.DotStim method), 88
setColor() (psychopy.visual.MovieStim method), 77
setColor() (psychopy.visual.PatchStim method), 71
setColor() (psychopy.visual.RadialStim method), 82
setColor() (psychopy.visual.ShapeStim method), 80
setColor() (psychopy.visual.TextStim method), 74
setColor() (psychopy.visual.Window method), 68
setContr() (psychopy.visual.DotStim method), 89
setContr() (psychopy.visual.MovieStim method), 78
setContr() (psychopy.visual.PatchStim method), 72
setContr() (psychopy.visual.RadialStim method), 83
setContr() (psychopy.visual.ShapeStim method), 80
setContr() (psychopy.visual.TextStim method), 75
setContrast() (psychopy.hardware.crs.BitsBox method), 113
setContrast() (psychopy.visual.PatchStim method), 72
setContrast() (psychopy.visual.RadialStim method), 83
setContrs() (psychopy.visual.ElementArrayStim method), 85
setCurrent() (psychopy.monitors.Monitor method), 127
setData() (in module psychopy.parallel), 131
setDefaultClock() (in module psychopy.log), 123
setDepth() (psychopy.visual.DotStim method), 89
setDepth() (psychopy.visual.MovieStim method), 78
setDepth() (psychopy.visual.PatchStim method), 72
setDepth() (psychopy.visual.RadialStim method), 83
setDepth() (psychopy.visual.ShapeStim method), 80
setDepth() (psychopy.visual.SimpleImageStim method), 73
setDepth() (psychopy.visual.TextStim method), 75
setDir() (psychopy.visual.DotStim method), 89
setDistance() (psychopy.monitors.Monitor method), 128
setDKL() (psychopy.visual.DotStim method), 89
setDKL() (psychopy.visual.MovieStim method), 78
setDKL() (psychopy.visual.PatchStim method), 72
setDKL() (psychopy.visual.RadialStim method), 83
setDKL() (psychopy.visual.ShapeStim method), 80
setDKL() (psychopy.visual.TextStim method), 75
setDKL_RGB() (psychopy.monitors.Monitor method), 128
setFieldCoherence() (psychopy.visual.DotStim method), 89
setFieldPos() (psychopy.visual.DotStim method), 89
setFieldPos() (psychopy.visual.ElementArrayStim method), 85
setFieldSize() (psychopy.visual.ElementArrayStim method), 85
setFillColor() (psychopy.visual.ShapeStim method), 80
setFillRGB() (psychopy.visual.ShapeStim method), 80
setFlipHoriz() (psychopy.visual.SimpleImageStim method), 73
setFlipVert() (psychopy.visual.SimpleImageStim method), 73
setFont() (psychopy.visual.TextStim method), 75
setGamma() (psychopy.hardware.crs.BitsBox method), 113
setGamma() (psychopy.monitors.Monitor method), 128
setGamma() (psychopy.visual.Window method), 68
setGammaGrid() (psychopy.monitors.Monitor method), 128
setHeight() (psychopy.visual.TextStim method), 75
setImage() (psychopy.visual.SimpleImageStim method), 73
setLevel() (psychopy.log.LogFile method), 122
setLevelsPost() (psychopy.monitors.Monitor method), 128
setLevelsPre() (psychopy.monitors.Monitor method), 128
setLineariseMethod() (psychopy.monitors.Monitor method), 128
setLineColor() (psychopy.visual.ShapeStim method), 80
setLineRGB() (psychopy.visual.ShapeStim method), 80
setLMS() (psychopy.visual.DotStim method), 89
setLMS() (psychopy.visual.MovieStim method), 78
setLMS() (psychopy.visual.PatchStim method), 72
setLMS() (psychopy.visual.RadialStim method), 83
setLMS() (psychopy.visual.ShapeStim method), 80
setLMS() (psychopy.visual.TextStim method), 75
setLMS_RGB() (psychopy.monitors.Monitor method), 128
setLumsPost() (psychopy.monitors.Monitor method), 128
setLumsPre() (psychopy.monitors.Monitor method), 128
setLUT() (psychopy.hardware.crs.BitsBox method), 113
setMask() (psychopy.visual.ElementArrayStim method), 85
setMask() (psychopy.visual.PatchStim method), 72
setMask() (psychopy.visual.RadialStim method), 83
setMaxAttempts() (psychopy.hardware.minolta.LS100 method), 116
setMeanLum() (psychopy.monitors.Monitor method), 128
setMode() (psychopy.hardware.minolta.LS100 method), 116
setMouseVisible() (psychopy.visual.Window method), 68
setMovie() (psychopy.visual.MovieStim method), 78
setNotes() (psychopy.monitors.Monitor method), 128
setOpacities() (psychopy.visual.ElementArrayStim method), 85
setOpacity() (psychopy.visual.DotStim method), 89
setOpacity() (psychopy.visual.MovieStim method), 78
setOpacity() (psychopy.visual.PatchStim method), 72
setOpacity() (psychopy.visual.RadialStim method), 83
setOpacity() (psychopy.visual.ShapeStim method), 80
setOpacity() (psychopy.visual.TextStim method), 76
setOri() (psychopy.visual.DotStim method), 89
setOri() (psychopy.visual.MovieStim method), 78
setOri() (psychopy.visual.PatchStim method), 72
setOri() (psychopy.visual.RadialStim method), 83
setOri() (psychopy.visual.ShapeStim method), 80
setOri() (psychopy.visual.TextStim method), 76
setOris() (psychopy.visual.ElementArrayStim method), 85
setPhase() (psychopy.visual.PatchStim method), 72
setPhase() (psychopy.visual.RadialStim method), 83
setPhases() (psychopy.visual.ElementArrayStim method), 85
setPin() (in module psychopy.parallel), 131
setPortAddress() (in module psychopy.parallel), 131
setPos() (psychopy.event.Mouse method), 106
setPos() (psychopy.visual.Aperture method), 93
setPos() (psychopy.visual.DotStim method), 89
setPos() (psychopy.visual.ElementArrayStim method), 85
setPos() (psychopy.visual.MovieStim method), 78
setPos() (psychopy.visual.PatchStim method), 72
setPos() (psychopy.visual.RadialStim method), 83
setPos() (psychopy.visual.ShapeStim method), 80
setPos() (psychopy.visual.SimpleImageStim method), 73
setPos() (psychopy.visual.TextStim method), 76
setPsychopyVersion() (psychopy.monitors.Monitor method), 128
setRadialCycles() (psychopy.visual.RadialStim method), 83
setRadialPhase() (psychopy.visual.RadialStim method), 84
setRecordFrameIntervals() (psychopy.visual.Window method), 69
setRGB() (psychopy.visual.DotStim method), 89
setRGB() (psychopy.visual.MovieStim method), 78
setRGB() (psychopy.visual.PatchStim method), 72
setRGB() (psychopy.visual.RadialStim method), 83
setRGB() (psychopy.visual.ShapeStim method), 80
setRGB() (psychopy.visual.TextStim method), 76
setRGB() (psychopy.visual.Window method), 69
setRgbs() (psychopy.visual.ElementArrayStim method), 85
setScale() (psychopy.visual.Window method), 69
setSF() (psychopy.visual.PatchStim method), 72
setSF() (psychopy.visual.RadialStim method), 84
setSfs() (psychopy.visual.ElementArrayStim method), 85
setSize() (psychopy.visual.Aperture method), 93
setSize() (psychopy.visual.DotStim method), 89
setSize() (psychopy.visual.MovieStim method), 78
setSize() (psychopy.visual.PatchStim method), 72
setSize() (psychopy.visual.RadialStim method), 84
setSize() (psychopy.visual.ShapeStim method), 80
setSize() (psychopy.visual.TextStim method), 76
setSizePix() (psychopy.monitors.Monitor method), 128
setSizes() (psychopy.visual.ElementArrayStim method), 86
setSpectra() (psychopy.monitors.Monitor method), 128
setSpeed() (psychopy.visual.DotStim method), 89
setTex() (psychopy.visual.ElementArrayStim method), 86
setTex() (psychopy.visual.PatchStim method), 72
setTex() (psychopy.visual.RadialStim method), 84
setText() (psychopy.visual.TextStim method), 76
setUseBits() (psychopy.monitors.Monitor method), 128
setUseShaders() (psychopy.visual.DotStim method), 89
setUseShaders() (psychopy.visual.MovieStim method), 78
setUseShaders() (psychopy.visual.PatchStim method), 72
setUseShaders() (psychopy.visual.RadialStim method), 84
setUseShaders() (psychopy.visual.ShapeStim method), 81
setUseShaders() (psychopy.visual.SimpleImageStim method), 73
setUseShaders() (psychopy.visual.TextStim method), 76
setVertices() (psychopy.visual.ShapeStim method), 81
setVisible() (psychopy.event.Mouse method), 106
setVolume() (psychopy.sound.SoundPygame method), 133
setWidth() (psychopy.monitors.Monitor method), 128
setXYs() (psychopy.visual.ElementArrayStim method), 86
ShapeStim (class in psychopy.visual), 79
shellCall() (in module psychopy.core), 65
show() (psychopy.gui.Dlg method), 110
shuffleArray() (in module psychopy.misc), 125
SimpleImageStim (class in psychopy.visual), 72
simulate() (psychopy.data.QuestHandler method), 102
SoundPygame (class in psychopy.sound), 132
StairHandler (class in psychopy.data), 95
startRemoteMode() (psychopy.hardware.pr.PR655 method), 119
stop() (psychopy.sound.SoundPygame method), 133
T
TextStim (class in psychopy.visual), 73
toFile() (in module psychopy.misc), 125
TrialHandler (class in psychopy.data), 93

U
uint8_float() (in module psychopy.misc), 125
updataElementColors() (psychopy.visual.ElementArrayStim method), 86
updataTextureCoords() (psychopy.visual.ElementArrayStim method), 86
updateElementVertices() (psychopy.visual.ElementArrayStim method), 86

V
VBI, 145
VBI blocking, 145
VBI syncing, 145

W
wait() (in module psychopy.core), 65
waitKeys() (in module psychopy.event), 107
warn() (in module psychopy.log), 123
warning() (in module psychopy.log), 123
Window (class in psychopy.visual), 66
write() (psychopy.log.LogFile method), 122

X
XidDevice (class in pyxid), 112
xlsx, 145
xydist() (in module psychopy.event), 107