At Ace Centre we meet a number of clients for whom traditional AAC solutions aren’t suitable. Here are some examples:
Said is a 45-year-old man who lives at home with his wife. Said and his wife are Kurdish and have lived in the UK for 10 years. Said speaks and understands English; his wife has limited English and speaks to Said in Kurdish. He has Motor Neurone Disease, which has now affected his speech to the point that others can no longer understand him. His eyesight is poor. He still operates a TV remote by holding it between his thumb and forefinger, but can only operate the four-way TV controls. He wants to spell out whole words and communicate his care needs to carers. He has tried eyegaze technology in the past, but this has not been successful.
Fen is a 56-year-old woman who lives in the UK; she is a first-generation immigrant from China. She has had a brainstem stroke limiting her physical skills to a small thumb movement. It is not clear what Fen can see – her family and care staff feel she can hear better than she can see. Prior to her stroke she spoke and understood Mandarin Chinese well with her family members. Fen now lives in a care home where staff are English speaking. Fen would like to make her care needs known to staff, and her family would like to communicate further with her. Fen has begun to use a partner-assisted auditory scanning system using a paper-based book put together with her family. As this system is in Chinese, it is not accessible to care staff.
Joel is a 6-year-old boy who has a significant visual impairment and physical difficulties. He lives at home with his Mum, Dad, baby brother and dog. His family speak both Portuguese and English. He has trialled some AAC by auditory fishing (dragging his hand across a device and hearing a cue about what is underneath it) and selecting with a switch, but this has proven difficult. He can activate a switch well with his head. He has begun to use a partner-assisted scanning book with his mother, e.g. to choose activities to play with and to make comments, like telling his mum he loves her.
An example of auditory scanning can be seen below.
In these cases there is a range of issues, but in general we need to identify solutions that:
- Can be operated with one or four discrete movements (in close proximity).
- Support different languages.
- Support auditory cues. The cue may be in one language and the main voice in a different language.
- Allow audio recordings for the cue and main voice where a language cannot be supported with text-to-speech.
- Are really simple for support staff to edit, with the ability to take vocabulary that already exists and import it into a different system.
Some auditory scanning solutions exist, but these are all traditionally grid-based systems designed to do multiple things, like support symbol systems and access methods such as eyegaze. Often the language can be edited, but it’s hard to import from something like a Word document into the software. Few solutions can really support quickly re-organising large blocks of language and “trees” (lists of words organised in categories or blocks). For someone who has a visual impairment, the words are experienced not as a block but as a list – so it makes more sense to present the words in a list. On paper-based systems, words and phrases are traditionally organised in lists rather than grids. This led to the question: can we replicate something similar on an electronic system to solve some of these issues?
Introducing Paul Pickford
Paul Pickford is a user of Assistive Technology and a past client of the Ace Centre. You can view more about Paul in the video below.
Paul now uses head-controlled technology to operate his computer and has been keen to raise funds for Ace Centre. He contacted us to donate some money to help us fund a bespoke solution for one or two of our clients. We identified the need from the case studies above and sketched out what was required. We are hugely grateful to Paul for offering this and hope that his input helps the development of AAC solutions for many individuals. The result is pasco – Phrase Auditory Scanning Communicator.
So what is pasco?
What pasco will do:
- Have an auditory cue and main voice; these can be different voices
- The cue and main voices can be recorded messages rather than text-to-speech
- The cue and main voices can be split between a wired headphone and the internal speaker of the iPhone
- Be able to spell using purely auditory scanning – speaking out each letter as it is written and then speaking out the whole word on a space or finish command
- The language can be created and edited using a simple text file – where each tree is just an indented list of words or phrases
- Support one switch scanning
- Support direct access, so that a communication partner, or a client able to make small direct movements, can use it
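To illustrate the text-file format mentioned above, here is a minimal sketch of how an indented list of words and phrases could map to vocabulary “trees”. This is not pasco’s actual parser or file format specification – just a hypothetical Python illustration of the idea, assuming a simple two-level layout where unindented lines are branches and indented lines are the phrases beneath them:

```python
# Hypothetical sketch: turn an indented text file into vocabulary trees.
# Assumes two levels only: unindented lines are branch names,
# two-space-indented lines are the phrases under the current branch.

SAMPLE = """\
Care needs
  I am in pain
  I need the toilet
Chat
  Hello
  Thank you
"""

def parse_tree(text):
    """Parse a two-level indented list into {branch: [phrases]}."""
    tree = {}
    branch = None
    for line in text.splitlines():
        if not line.strip():
            continue                      # skip blank lines
        if line.startswith("  "):
            tree[branch].append(line.strip())  # phrase under current branch
        else:
            branch = line.strip()              # new branch
            tree[branch] = []
    return tree

tree = parse_tree(SAMPLE)
print(tree["Care needs"])  # ['I am in pain', 'I need the toilet']
```

A scanner would then step through the branch names with an auditory cue, and on selection step through the phrases beneath, which is why a plain indented file is so easy for support staff to edit or to paste in from an existing document.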
Try out the app on an Apple iOS device (iPhone or iPad) by downloading the free app here. The web version can be trialled at https://app.pasco.chat . Please note that splitting audio between headphones and the device speaker is only available in the iOS app. For some support see: http://acecentre.org.uk/services/research/pasco-support/
Warning: Please remember this is very much a development project. There is limited support. If you are looking at this app to see if it is something that may help you or someone you are supporting please contact your local AAC service for guidance on whether this is the correct solution for your needs.
What do we expect to happen now?
pasco is an open-source development project. We believe that to meet the wide range of needs of our clients, a large number of solutions are required. As part of this mix, many individuals are best supported by a business model offering good long-term support and maintenance. As well as providing a solution for a few niche use cases, we hope to inspire the development of future AAC software and solutions focused on this area of need. To do this, we aim to publicly share case studies and data about use of the app on this page going forward. Issues with our software will be publicly available here.