Ace Centre Project List
Ace Centre is a UK-based charity that provides services for the UK National Health Service (NHS) in Augmentative and Alternative Communication (AAC) in two regions. As part of this service, we assess for and identify Assistive Technology (AT) solutions for people with disabilities, enabling access to technology and communication. We try to use existing solutions and devices, but sometimes these don't exist. We often identify a need that has no existing solution, or one that needs evolving or improving. Sometimes a solution existed in the past but needs bringing up to date to use current technologies. We use an open design approach and publish our software and hardware solutions (https://github.com/acecentre). There are some amazing devices out there, but there's still more to be done. This is our to-do list!
Eye Blink Detection
There’s a need to detect eye blinks fast, reliably and in a range of conditions.
We work with a range of disabled individuals who cannot speak due to either a cognitive or physical disability. Take for example Paul: https://www.youtube.com/watch?v=3fxrVktVzn0 or Michael: https://www.youtube.com/watch?v=OMeml1r8hvE&list=PLWWQ5nlUD_tvlC03wNzyGc2pCC57Ehm7P&index=2
Both of these individuals use a range of solutions to help them communicate. What typically happens in conditions such as MND (ALS) is that the muscles deteriorate across the body to the point that only the eyes retain some motor control. Even then the person fatigues and may struggle to hold their eyes open. However, voluntary blinking is still possible.
Stephen Hawking famously used a form of blink detector (known as a switch) on his cheek. These kinds of switches are VERY fiddly to set up and need constant re-alignment. See a commercial example of such a device here: https://key2enable.com/produtos/a-blinx/. We rarely get these to work long term. Also bear in mind that in adults with a deteriorating condition, the amount they can open their eye varies throughout the day, so what constitutes a voluntary blink may change.
See some more examples, in particular of blinking in patients, here: https://www.dropbox.com/s/8zxsadz8ym66szo/eyeBlinkDemo.mp4?dl=0 (NB: We demonstrate in this some "looking up" as well as a blink, as sometimes with these light-sensor detectors a look up is easier to detect on the eye than a blink. We also demonstrate a camera system and an accelerometer approach to eyebrow movement. Don't get too distracted by these – just look at the range of eye movement.)
Some teams have taken a purely CV approach to this (see https://github.com/danetomseth/Blink1.5 or https://github.com/AbilitySpectrum/blink_based_aural_scanning_keyboard_with_Morse_code_option), but a CV approach is not that reliable – and these projects don't always build. We could use an ML approach to this with a front-facing camera. Bear in mind you have to deal with a lot of things when doing this:
- placement of the camera
- light levels
- obscuring of the eye.
What may make the task easier is a cheap USB endoscope (e.g. https://www.alibaba.com/product-detail/Adjustable-Endoscope-Camera-Flexible-IP67-Waterproof_1600146092589.html?spm=a2700.galleryofferlist.normal_offer.d_title.3c3d3bf7mJuIdk) – these often have a light on the end, and we can place one in front of the eye on a pair of glasses or a headmount. We'll need:
- Probably a "calibration" phase.
- An area of detection or "Auto". So draw a box over the area of the camera image in which to detect a change (see http://sviacam.sourceforge.net – which takes this approach using CV).
- Windows support as the priority. But it could also run as a "library" for iOS apps so users could embed it in an app (e.g. Pasco https://github.com/acecentre/pasco and https://github.com/AceCentre/pasco/issues/231)
- Maybe useful: https://google.github.io/mediapipe/solutions/iris
Don't worry about a text entry interface for this. There are a LOT of solutions that take keyboard strokes as a switch entry. Basically, a voluntary blink should output a "Space" keyboard character. Test it out with tools such as pasco http://app.pasco.chat (set to "automatically scan") – or https://app.cboard.io/board/root (set settings to Scanning) or https://grid.asterics.eu (Input options -> Scanning)
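As a concrete starting point, the widely used eye-aspect-ratio (EAR) measure plus a hold-time rule can separate voluntary blinks from reflexive ones. This is a minimal sketch, assuming eye landmarks come from something like MediaPipe Face Mesh and that the hosting program sends the Space keystroke (e.g. via a keyboard-emulation library); the thresholds are illustrative.

```python
import math

def eye_aspect_ratio(p):
    """p: six (x, y) eye landmarks - corners p[0], p[3]; lids p[1], p[2], p[4], p[5].
    EAR drops towards 0 as the eye closes."""
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])
    return (dist(p[1], p[5]) + dist(p[2], p[4])) / (2.0 * dist(p[0], p[3]))

class BlinkDetector:
    """Flags a *voluntary* blink: eye held closed longer than min_hold seconds.
    Shorter, reflexive blinks are ignored."""
    def __init__(self, open_ear, closed_ratio=0.6, min_hold=0.4):
        # closed_ratio is relative to a calibrated open-eye EAR, so the
        # threshold adapts to how far the user can open their eye that day.
        self.threshold = open_ear * closed_ratio
        self.min_hold = min_hold
        self.closed_since = None

    def update(self, ear, t):
        """Feed one EAR sample at time t (seconds). Returns True once per
        voluntary blink, at the moment the eye reopens - at which point the
        host program would emit the Space key."""
        if ear < self.threshold:
            if self.closed_since is None:
                self.closed_since = t
            return False
        if self.closed_since is not None:
            held = t - self.closed_since
            self.closed_since = None
            return held >= self.min_hold
        return False
```

The calibration phase above would simply record the user's typical open-eye EAR to pass as `open_ear`, which matters here because eye opening varies through the day.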
Eye Movement Detection
Detecting eye movements – up/down/left/right, fast, reliably and in a range of conditions. This needs to output a keystroke.
Some people with conditions like Multiple Sclerosis or Motor Neurone Disease (ALS) have no motor control except at the eye. Some can use eye-gaze technology, which detects eye movement and maps it to mouse control – but for others this just doesn't work due to a visual difficulty. So for these people we would like to take the up/down/left/right movement and use it for Assistive Technology. Take for example pasco – it's designed for visually impaired users to "hear" the letters and phrases on the screen and select with a button. But for some we could make use of this four-way action – see the video here: https://www.dropbox.com/s/fgghsuogv0c3ki9/PascoDemo.mov?dl=0
Solutions like http://sviacam.sourceforge.net/ use OpenCV to detect pixel change within an area of the image. We can use this with an endoscope to detect the pixel change of a pupil in two areas – but it's very unreliable and fiddly to set up. There is a commercial solution, https://www.eyecontrol.co.il/, that does detect up/down/left/right and blinking – but it's laggy and expensive, and the interface is too baked into the solution. We need to use the gesture and have a way for it to be used by other assistive technologies, which commonly take a key press as a "switch".
Don't worry about a text entry interface for this. There are a LOT of solutions that take keyboard strokes as a switch entry. Test it out with tools such as pasco http://app.pasco.chat (set to "automatically scan") – or https://app.cboard.io/board/root (set settings to Scanning) or https://grid.asterics.eu (Input options -> Scanning)
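One way the four-direction detection could work, sketched under the assumption that a pupil centre is available from a CV stage (OpenCV or MediaPipe Iris): compare it against a calibrated rest position with a dead zone, and emit one event per excursion so a held gaze doesn't auto-repeat. The pixel values and dead-zone size are illustrative.

```python
class GazeClassifier:
    """Dead-zone classifier: no output until the pupil moves further than
    dead_zone pixels from the calibrated rest point, then one event per
    excursion (the user must return to rest before the next event)."""

    def __init__(self, rest_x, rest_y, dead_zone=15):
        self.rest = (rest_x, rest_y)
        self.dead_zone = dead_zone
        self.active = None   # direction currently held, if any

    def update(self, x, y):
        """Feed one pupil centre (pixels). Returns 'left'/'right'/'up'/'down'
        once per excursion, else None. The caller would map the returned
        direction to a keystroke."""
        dx, dy = x - self.rest[0], y - self.rest[1]
        if abs(dx) <= self.dead_zone and abs(dy) <= self.dead_zone:
            self.active = None          # back at rest; re-arm
            return None
        if self.active is not None:
            return None                 # still in the same excursion
        if abs(dx) >= abs(dy):
            self.active = "right" if dx > 0 else "left"
        else:
            self.active = "down" if dy > 0 else "up"   # image y grows downwards
        return self.active
```

The "one event per excursion" rule is what makes this usable as a switch: a slow or fatigued eye movement still produces exactly one key press.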
Gesture Body Switch
Use a camera with depth sensing (or a regular camera) to detect very small body movements – such as a raised finger, a knee movement, or a small (less than 5 cm) head movement – to operate equipment such as a communication system.
We use small buttons (switches) for people with disabilities, which they press with a body part to activate communication equipment or a home automation system. The problem is these switches are expensive – and have to be positioned in exactly the right place. Take for example a head switch: the button needs to be placed near the temple (forehead), but if the user's position drops they can no longer reach it. A carer then has to move the button back into the correct place.
So a camera solution has been suggested (see http://gesture-interface.jp/en/gesture-interface/ – although this project is not active and no binaries exist). That software uses a depth-sensing camera – which of course is available on some iOS devices – so it might be possible to reuse the approach for our needs. It needs:
- a "calibration" phase – point the camera at the body part to be detected while the user is at "rest"
- ask the person to do the desired movement; probably needs doing a number of times.
- software “learns” this movement.
- outputs: a key press. On Windows this could be a keypress such as the Space key. On iOS this is more difficult – but could the device emulate a Bluetooth HID device? If so, it could emulate a Bluetooth keyboard, with a secondary device connecting to it and, when the signal is triggered, receiving a "Space" character (or other character).
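The calibration steps above can be sketched with the simplest possible "learning": summarise each frame as a vector of values from a region of interest (depth readings or grey levels – how these are captured is outside this sketch) and set a trigger threshold between the rest frames and the recorded gesture frames. A real implementation would want a proper classifier; this just shows the shape of the calibrate/detect flow.

```python
import math

def _mean(frames):
    """Element-wise mean of a list of equal-length value vectors."""
    n = len(frames)
    return [sum(col) / n for col in zip(*frames)]

def _dist(a, b):
    """Euclidean distance between two frame vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

class GestureSwitch:
    def calibrate(self, rest_frames, gesture_frames):
        """rest_frames / gesture_frames: frame vectors captured during the
        'rest' phase and the repeated 'do the movement' phase."""
        self.rest = _mean(rest_frames)
        rest_spread = max(_dist(f, self.rest) for f in rest_frames)
        gesture_dist = min(_dist(f, self.rest) for f in gesture_frames)
        # Put the trigger threshold halfway between rest noise and the
        # nearest recorded gesture example.
        self.threshold = (rest_spread + gesture_dist) / 2.0

    def triggered(self, frame):
        """True when the current frame is far enough from rest - at which
        point the host program would emit the key press."""
        return _dist(frame, self.rest) > self.threshold
```

Because the threshold is set from the user's own calibration data, a small movement for one person and a larger one for another both work without hand-tuning.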
Sharing Access Between Devices
Some people who use electronic Augmentative and Alternative Communication (AAC) systems to speak need a specific input device such as a specialist switch, sip-and-puff device or specialist joystick for text entry and cursor movement. This is often called the "access device", as it's the bit of tech that accesses the AAC device. The AAC device is often based on a mainstream device – a Windows PC, Android tablet or iPad, most commonly a Windows device. Often the user will also want to control another piece of hardware such as an iPad or mobile phone. It's not practical to have an access method for each device, or to unplug the access device from one device and plug it into the other – the user may also require help to do so.
Develop software that will allow text entry and cursor movement from the access device to be shared between devices. This may involve an app or piece of software on each device to send/receive inputs.
We have developed a hardware version of this using a BT Feather; the next step is to remove the hardware if possible.
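To illustrate the software-only direction, here is a hypothetical wire format for relaying input events between machines over a plain TCP link. The message fields and framing are illustrative assumptions, not the protocol the BT Feather hardware uses.

```python
import json

def encode_event(kind, value):
    """kind: e.g. 'key' or 'mouse_move'; value: key name or (dx, dy).
    Newline-delimited JSON keeps framing trivial over a stream socket."""
    return (json.dumps({"kind": kind, "value": value}) + "\n").encode("utf-8")

def decode_events(buffer):
    """Split a received byte buffer into complete events plus any trailing
    partial line to keep for the next socket read."""
    lines = buffer.split(b"\n")
    remainder = lines.pop()          # last element is empty or a partial line
    events = []
    for line in lines:
        if line:
            msg = json.loads(line)
            events.append((msg["kind"], msg["value"]))
    return events, remainder
```

The sender side would sit on the device the access device is plugged into; each receiver app would replay the decoded events as local key presses or cursor movement.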
AAC Multiple Screen
We've developed an app called Pasco that is an auditory-scanning AAC system (https://acecentre.org.uk/project/pasco/). In some use cases it would be extremely useful if the output from this app could be relayed or broadcast to other portable screens elsewhere in a user's house, so that other family members could view the messages while completing tasks elsewhere – in the garden, for example.
Develop a system that would allow the text output generated in Pasco to be broadcast to other small paired screens within a 20m range.
You could consider the additional screens almost as pagers (remember those!) – small displays that show text and can send an acknowledgement that the message has been viewed. There might be only one additional screen, or there might be several – one in a pocket, one upstairs, one in the kitchen. It's a bit like this, but sending text rather than an icon: https://siliconsquared.com/.
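A sketch of the bookkeeping side of broadcast-with-acknowledgement: one sender (Pasco), several paired screens, each message tracked until every screen reports it viewed. The transport for the 20 m range (Bluetooth, Wi-Fi, or similar) is deliberately left open; this models only the pairing and acknowledgement state.

```python
import itertools

class MessageBoard:
    def __init__(self, screen_ids):
        """screen_ids: identifiers of the paired screens."""
        self.screens = set(screen_ids)
        self._ids = itertools.count(1)
        self.pending = {}   # message id -> set of screens yet to acknowledge

    def broadcast(self, text):
        """Returns (message_id, text) to hand to the transport layer."""
        msg_id = next(self._ids)
        self.pending[msg_id] = set(self.screens)
        return msg_id, text

    def acknowledge(self, msg_id, screen_id):
        """Called when a screen reports the message was viewed.
        Returns True once *all* screens have viewed it, so the sender's
        UI can show the user their message got through."""
        waiting = self.pending.get(msg_id)
        if waiting is None:
            return True
        waiting.discard(screen_id)
        if not waiting:
            del self.pending[msg_id]
            return True
        return False
```

Tracking per-screen acknowledgement is what lets the AAC user know whether the person in the garden has actually seen the message, not just that it was sent.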
Teach Morse code to a blind and physically impaired user
Morse code can be an extremely useful input method and with practice it’s possible to reach good text input rates. More information here https://acecentre.org.uk/project/morse-code/
We need to teach people how to use a TandemMaster (http://www.tandemmaster.org/home.html) or other morse-to-keyboard solutions. With these you can have more than just letters and numbers in morse: you can use all the keys of the computer and even mouse commands. So we need a way for a client to adapt and extend the morse set they are learning. For example:
- right arrow (–..-.)
- left arrow (..-.—-)
- enter (.-.-) AA
- space (-..-.)
- escape (—.)
- backspace (—-)
- full stop (.-.-.-) AAA
Then more application-specific shortcuts, for things like this for Dolphin Guide:
- F2 (Actions menu in Guide) (–..—)
- F5 (Read All) (–…..)
- Ctrl (pause / resume reading)
For some individuals who have no speech and are visually and physically impaired – but where we can find two "activation" body sites (switch sites) – we might consider morse code (if the user is literate). Morse is an incredibly efficient system – but it takes time to learn and is difficult to actually use without additional software to speak.
- Add in a way of editing or importing a morse-character set that extends the built-in letters (or overwrites existing ones)
- Add these and use local storage to remember the list and how far the user has got
- Add a configuration screen to configure one, two or three switches (keys)
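The editable morse set above could be modelled as a built-in letter table plus a user extension layer that can add actions or override existing codes. The custom entries shown in the test are illustrative stand-ins; a client's real set would be imported from their own configuration.

```python
# Built-in international morse letters (dot/dash string -> character).
MORSE_LETTERS = {
    ".-": "a", "-...": "b", "-.-.": "c", "-..": "d", ".": "e",
    "..-.": "f", "--.": "g", "....": "h", "..": "i", ".---": "j",
    "-.-": "k", ".-..": "l", "--": "m", "-.": "n", "---": "o",
    ".--.": "p", "--.-": "q", ".-.": "r", "...": "s", "-": "t",
    "..-": "u", "...-": "v", ".--": "w", "-..-": "x", "-.--": "y",
    "--..": "z",
}

def build_table(extensions):
    """extensions: {code: action} loaded from the user's own set. Later
    entries override the built-ins, so a client can remap anything -
    e.g. adding arrow keys or application shortcuts."""
    table = dict(MORSE_LETTERS)
    table.update(extensions)
    return table

def decode(code, table):
    """Look up one dot/dash sequence; None means 'not in this user's set'."""
    return table.get(code)
```

Keeping the extension set as plain data (a dict, or a JSON/CSV file behind it) is what makes the "editing or importing" and local-storage bullets straightforward.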
We need a "game" that teaches morse code and slowly builds up the complexity to teach other functions like delete character, Space, Enter and maybe function keys (these could be mapped to a speak command, perhaps). The challenge is to do this gradually, with success at each stage, and in a way the user can manage independently. Going forward they might exit "game" mode and use this to speak.
It would respond to Space and Enter characters for two-key morse code, or just the Space key – handling the timing elements of morse – for a one-key method. Maybe useful: https://morse-learn.acecentre.net and https://github.com/acecentre/morse-learn – background is buried in https://acecentre.org.uk/project/morse-code/ (watch the presentation)
See https://github.com/AceCentre/morse-learn/issues/12 for the kinds of keys it would be useful to teach a user beyond just letters.
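For the one-key method mentioned above, the timing element can be isolated into a small state machine: press duration distinguishes dot from dash, and a long enough pause ends the character. The thresholds here are illustrative – a teaching game would start forgiving and tighten them as the learner improves.

```python
class OneKeyMorse:
    """Single-switch morse entry driven by press/release timings."""

    def __init__(self, dash_after=0.3, letter_gap=0.7):
        self.dash_after = dash_after   # press longer than this -> dash
        self.letter_gap = letter_gap   # release longer than this -> letter done
        self.code = ""

    def press(self, duration):
        """Record one switch press of the given length (seconds)."""
        self.code += "-" if duration >= self.dash_after else "."

    def gap(self, duration):
        """Record the pause after a release. Returns the finished dot/dash
        string (ready for a table lookup) when the pause is long enough,
        else None while the character is still being entered."""
        if duration >= self.letter_gap and self.code:
            finished, self.code = self.code, ""
            return finished
        return None
```

A two-key variant skips the duration logic entirely: one switch means dot, the other means dash, with only the letter gap still timed.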
Social Script AI Chat Bot
Help people learn to use communication software in a failsafe way.
It’s really important for children and adults who are new to AAC to learn and understand how the system works.
We work with a range of disabled individuals who cannot speak due to either a cognitive or physical disability. Children and adults who are new to AAC also need to learn slowly and have success at each stage. Typically we do this in clinics in staged scenarios. E.g. "Today we are going to play at shops. I'll be the shopkeeper and you are buying ingredients for a pizza" (they would hopefully find sentences like "Can I have.." followed by a food name, plus please, thank you, how much is that, etc.) or, for an adult, "What would happen if I just came into the room? Can you find me some words to use to say hello?" They would navigate to Greetings and find a sentence.
The problem is these sessions need to be repeated many times to increase success and are resource-intensive. What if we could have a piece of software that helps do these “social scripts”?
These "social scripts" need drawing up, but we can help with that. The problem is the conversation can go a number of ways, and an important lesson is dealing with mistakes. So the user might mishit something – on purpose ("Can I tell you a joke?") or by accident ("Elephant") – and it should be dealt with appropriately: "Did you really want to buy an elephant? For your pizza? I don't think we have those in stock!". We need:
- The app would have two sides: one for the AAC user and one for the therapist (partner).
- The partner would be able to load a script for the user to trial, and to see how the user responded to previous scripts.
- So the therapist would pick a scenario – "getting the register", "talking to the doctor" or "seeing your friend". The scenario then loads in the AAC user's app and steps through questions. The AAC user answers and the AI system replies appropriately.
- An alternate mode is "single player" mode, where the user selects the scenario themselves and there is no therapist interacting.
- The app would need good encryption
- Most AAC users use text-to-speech, so the app should respond using text-to-speech voices
- It should require minimal input from the communication aid user – they often have limited physical control
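A rule-based sketch of the script engine implied by the list above: each step has a prompt, expected phrases, and a playful repair line for off-script answers (the "elephant on a pizza" case), with a log the therapist side could review. A production version might generate repair lines with an LLM, but keeping this layer rule-based keeps behaviour predictable and failsafe.

```python
class SocialScript:
    def __init__(self, steps):
        """steps: list of dicts with 'prompt', 'expected' (lowercase phrases
        that count as on-script) and 'repair' (reply to an off-script answer)."""
        self.steps = steps
        self.i = 0
        self.log = []   # (user_said, on_script) pairs for the therapist view

    def prompt(self):
        """Current question, or None when the scenario is finished."""
        return self.steps[self.i]["prompt"] if self.i < len(self.steps) else None

    def respond(self, user_text):
        """Returns the system's reply. Only advances on an expected answer,
        so mistakes get a repair line and another chance - the 'failsafe'
        part of the brief."""
        step = self.steps[self.i]
        on_script = any(p in user_text.lower() for p in step["expected"])
        self.log.append((user_text, on_script))
        if on_script:
            self.i += 1
            return self.prompt() or "Well done - scenario complete!"
        return step["repair"]
```

The `log` gives the partner side its "see progress for previous scripts" view, and replies would be routed through the user's own text-to-speech voice.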