Hello guys,
I have created the free app PiccyBot that speaks out the description of the photo/image you give it. And you can then ask detailed questions about it.
I have adjusted the app to make it as low vision friendly as I could, but I would love to receive feedback on how to improve it further!
The App Store link can be found here:
https://apps.apple.com/us/app/piccybot/id6476859317
I am really hoping it will be of use to some. I previously created the app 'Talking Goggles', which was well received by the low vision community, but PiccyBot is a lot more powerful and hopefully more useful!
Thanks and best regards,
Martijn van der Spek
Comments
Claude Opus removed?
I just talked about how good it is, and now it's not there any more.
As for the personality thing, the regular output is still in the text. That is, if you have personality on, the voice output and the text are different; you can read the "non-personality" output using VoiceOver.
Claude Opus
Hi LaBoheme, there have been some problems with the Claude Opus model for about a day now. It has been mixing up images. I had to disable it while working on a fix; I hope I can add it back soon!
The Google Gemini model has been upgraded and has improved significantly from what I can tell. Have a look at that as well?
Thanks for your efforts!
Thanks for your efforts, first of all. My thoughts have already been mentioned above. For image description, the Be My AI feature of the Be My Eyes app is sufficient for me now. I would additionally suggest video description for your development, or maybe simply an EPUB ebook reader first. Since the voice quality of VoiceOver has dropped suddenly and the voice is very muffled now, AI voices for reading text and documents are very necessary.
Thank you for all your kindness again.
Update to fix sharing larger images
As Martin reported, there was an issue with the sharing of images to PiccyBot. I have just released an update that should fix that. Please try it out.
The bug reported by PrivateAI was related to the Claude 3 Opus model. It somehow gets stuck on an image and/or mixes multiple images. I have disabled the model for now to sort this out.
The settings page has also been updated to refresh the image more consistently, as pointed out by Mr Grieves.
Thanks guys, keep bugging me and I'll see what I can do to further improve the app.
I'm grateful but...
Hi Martijn
It's still amazing regardless, even if I still can't share photos directly from X with the new update, but I can always use other AI apps to get the descriptions... it's not a dealbreaker. I'm still paying for the monthly subscription... And yes, please add Opus back to the AI models ASAP! I used that one the most, but I'll give the other ones a go... thanks a lot for your help and work! You are doing an absolutely outstanding job! I really do appreciate this.
Opus working again
Thanks Martin! I hoped that the fix for sharing larger images to PiccyBot would have helped you, but there is clearly still something to be done there. Will check that.
Claude 3 Opus is behaving again and I have added it back as an option. Good luck! If this or any of the other models responds wrongly, let me know please.
Great, Opus is the best
Interestingly, one would think Haiku is better than Sonnet, but Sonnet is actually better when it comes to image analysis, in my own experience.
The voices are for fun.
If you want to use VoiceOver without them, turn them off. The dev has every right to spice up their app with voices if they want to.
@lotty
You're very critical of this app and I don't see why. Just stop using it if you don't like it. The dev clearly just wants to have a bit of fun, and they've done tonnes more by implementing other LLMs, compared to Be My Eyes, which is still just using ChatGPT-4.
As for not wanting opinions: in that case, stop posting, because you're going to get opinions on your posts and their replies; that's how this whole thing works.
I'm not attacking you; if I was, you'd know.
I always find these kinds of arguments amusing. It's attacking when someone disagrees with people like yourself, but when you go after apps that's OK, right? I mean, after all, you just want them to be improved, right?
It's not like your opinion could be seen as an attack on the dev's hard work. Come on, Lotty, I thought you were smarter than this.
Questioning is allowed, of course it is, but if you put your opinion out there, then you must be prepared for people to find it wrong or disagree with you; that's how the internet works.
And I do disagree with you; I find some of your replies on this thread to be overly harsh for no reason. Just because an app is made for the blind doesn't mean we should crap all over the work if it doesn't meet our standards, and before you say it: no, that's not me being soft, it's me being tired of these harsh attacks towards devs with no consequences.
As for voices costing money, sure they do, but I'm sure the dev knows this and doesn't mind paying for them, as it's a fun product for them.
If you really don't want to use the voices, pay for the product and turn them off, although I'm quite sure the free version has that now.
If you want to see this as me jumping to the dev's defence, so be it. They're making an app that they enjoy, and honestly? They didn't need to go as far as they have. It has tonnes of language models, and even though I don't use it any more, I'm very impressed with that fact.
I just remembered something: the dev's first release didn't have VoiceOver reading support. We got on here and asked for it, and not only did we get it, but buttons were labelled, language models were added, and more. So yeah, I'm jumping to this person's defence, because this is a damn good app with amazing response times from the dev.
@lotty You can think what you like about me; that doesn't bother me anymore. I'm a big boy and have my big boy pants on. But no, I won't stand for this crap. The dev is responsive and very agreeable to changes, and damn it, that's to be commended.
May just be the fancy of a mad man but...
So with all this discussion regarding the included voices, some say they're really nice, some don't think they are all that useful... Earlier, I came across a YouTube video with a guy reviewing all the AI chatbot apps out there, and during the video he mentioned how lousy the voices are on some of these, and the idea just popped into my head: what if this is not just a photo description app? What if you could chat with the AI with its super nice voices? I immediately went to the app and tried to chat with the AI. Unfortunately, it doesn't seem to want to talk about anything not related to the photo, and won't talk at all without a photo. And of course, the response speed is too slow and draggy for it to be a good chat AI, but I can totally see this being a great function if the response time were quick and you could chat with the AI without describing photos.
Now I don't know anything about how complicated it is to turn a photo description app into a chatbot app, but I figure the voices are there, the personalities are there, the AIs are there, so it may be quite doable.
BTW, there's a chatbot app called Call Annie that can chat, do scenarios, role play, and even describe photos. The voices it uses suck though :)
Thanks all
Thank you Brad for your support despite not even using the app. I enjoy developing these apps as the feedback from the community is great and inspiring.
Lottie, the app is set up like this at the moment and I don't want to change too much. I realise it is not ideal, but I have to balance the income and costs, as I am not sponsored by any outside party like some of the other solutions are. It is what it is.
PrivateAI, thanks. I have created chatbots before; my app Voice Answer was actually a hit at the time Apple launched Siri. But I don't want to mix chatting into PiccyBot; I'd rather remain focused on image recognition and perfect that as much as possible.
I have just added one more model to PiccyBot's model list: Llama3. It is not the best model at the moment, but the special thing about it is that I am running it on my own machines, so there is no dependence on third parties for the image processing. This means full control to improve it specifically for low vision purposes, and costs will be minimal. Try it out when you can, please.
Llama3 is actually quite good
What kind of machine is it run on?
Laboheme, this local Llama3…
Laboheme, this local Llama3 currently runs on a single PC with 32GB RAM and an RTX 3060. It uses the smallest Llama3 model. The next step would be to run a larger model on a more powerful machine. But I agree, it's a very encouraging start.
@PrivateAI
I too would recommend Pi AI if you want to chat. It's a bit hit and miss, but the British male voice is quite nice in my opinion. You can preview each voice too, and it's free.
The accessibility of the app could do with a bit of work, and as far as I know there's no real way to delete your chats apart from deleting your account, so think about that if you like to remove chats like I would. It's not even because of anything sexual or anything like that; I'd just like a bit more control.
PiccyBot updated with OpenAI's new GPT4o model
Hi guys,
I have added OpenAI's new GPT-4o model to the list of AI model options to work with. For now, it is only accessible to signed-up users. If it works well, I'll use it as the default model in PiccyBot later this week.
To get the best impression of GPT-4o's improved speed, set the voice to 'None' and turn personality off.
I expect to make more use of the new OpenAI features and integrate them into PiccyBot in the coming days.
Good luck trying it out!
Checked it out
I think the speed has improved somewhat. But the groundbreaking thing would be to integrate the incredible things the OpenAI guys were demoing yesterday. Otherwise, the quality of the descriptions is more or less the same, and honestly I think they are as good as they can get.
Hey ...
Dear developer, I don't usually have the time and energy to come back and check this but I do eventually. I have a few questions for you.
Whatever happened to Claude 3 Opus? Is it coming back? Which AI model gives the most accurate description of photos? Does the voice I choose affect the accuracy of the descriptions, or does it only depend on which model I choose? Are you going to implement any of the new OpenAI developments into this app, such as live image description? I would really enjoy that. I hope this app does not become obsolete when all of that gets implemented into ChatGPT and when Be My Eyes implements it into their app. I see that you've added the image processing of GPT-4o. I appreciate your response time and the quick development of everything you've done for this app so far. It's awesome! This is possibly one of the best apps I've dealt with over the last couple of months, due to all the amazing development you've done. I'm happy that you've implemented so many tools from the ideas of the paid supporters... it's very kind of you to help us. I'm so grateful for you. The main reason why I support this app is that I like the fact that I can choose the different voices, the personality if I want it (since that makes me laugh), and also the choice I have of AI models. I enjoy getting all of those different models to describe my different types of photos, then saving them to my albums with descriptions. I was once fully sighted, so photos do mean something to me. ☺️
Response to Martin
Thanks a lot for supporting PiccyBot! Due to changes at Anthropic, I have taken Opus off for now, but will add it back as soon as I can. Or maybe replace it with a new model by them, things keep moving. I did add Gemini Flash. I feel the current Gemini models are underappreciated.
I definitely aim to implement any new development by OpenAI into PiccyBot as soon as it becomes available.
You are right, first ChatGPT and then Be My Eyes will most probably be the first to include OpenAI models into the apps. However, as an independent developer I hope to benefit from new developments by Google, Meta and Anthropic as well, and pick and choose elements from the best.
I am now trying to add video to PiccyBot. No promises on speed, but it is the way forward, so let's start with it.
I created a quick demo of loading PiccyBot on Vuzix smartglasses and have it describe the environment while I walk into my office: https://youtu.be/o9QeVxnkvzE
Getting this to work smoothly will be the challenge, but with the current AI developments it could be weeks rather than months.
Once again, thanks a lot for supporting PiccyBot! Exciting times ahead for sure!
That part...
Very cool development Martijn!
The way you have engineered that into smart glasses is amazing. You should definitely be proud of that.
I'm looking forward to seeing what develops with this app when OpenAI releases all of that to the public. In the meantime, thank you for your response and all this cool stuff you're doing... I'm certainly looking forward to the progression of your AI models, hopefully as soon as possible. Thank you for the response... Godspeed and blessings to you.
Smart glasses...
Are those smart glasses available to the general public and if so, how do we get a pair? I hope they're not very expensive but maybe they have a subscription model possibly. I'm interested in any info about these!
You're welcome for my support.
thank you.
Too expensive for the blind/V.I.
I always get excited about these inventions, but they market them to a community that is not able to afford them, which is sickening to me. It makes absolutely no sense how they make these cool inventions for our community, but we can't afford them! I heard about the Seleste smart glasses too, and those were more affordable for me, but sending them that money up front and then waiting for several weeks to get them seems outrageous to me. I understand that this is a small niche community, but they certainly need to work with us by understanding our finances and making things more accessible and affordable for us to use.
I know this was off-topic however, I like that these things are available but I don't like that I can't easily get them.
Oh well. Thanks for putting that info out there.
Affordable solutions
I agree to aim for the most practical and economical solution. I think at the moment that is simply developing PiccyBot further as an app. Everyone already has a phone, so let's use that. Or otherwise use a device like smartglasses linked to that phone. The glasses I used here are standalone and that has its use for institutions where it can be shared between users easily, but for individual use not as much.
So I believe apps and your phone are not dead yet. I will keep the focus on PiccyBot. Glasses do catch quite some attention though ;-)
I honestly would prefer glasses.
At least with those you don't have to take out your phone each time.
I'd not mind paying a monthly fee, but to own the glasses, not to keep paying for them.
I'd not mind donating to server costs, but I do want to buy the glasses outright if this becomes an option.
I'm really not a fan of the subscription model way of doing things.
The AI sounded scared, lol; it's like, "Um, there's a door, please don't end my existence!"
Thank you
If you're able to get the app to describe videos, then, can you please post a demo? Thank you for all the hard work that you're doing. I truly appreciate it, and customer feedback is amazing. Keep up the good work
The issue with describing video.
It sounds great on paper but, let's say you have a 20-minute gameplay video: will you be able to run it through an AI and then, after it's done processing, listen to it alongside the audio, or will it be on the fly?
Also, this AI isn't trained on audio description data, so I don't know how useful it will be.
That's a bit much
Realistically, I wasn't necessarily thinking of videos that are 20 minutes long, more like less than five minutes, like music videos, or short presentations.
Live videos
@the dev: Very interesting that you're trying the live video thing out. Question: are we able to ask specific realtime questions and get the answers on the fly? For example, in your demo, could I ask whether the chairs are empty or occupied? Or where a given door is with respect to my position, etc.? And will it be able to guide me towards an empty chair or door handle when prompted?
Pay it forward...
Martijn, I appreciate the forward thoughts on this project.
It would also be appreciated if these companies would actually pay it forward by giving some donations to those users who can't afford the smart glasses. I am sure they're making enough profit from the prices of them, so why not make the company look better by offering some unfortunate individuals a pair or two several times a year? People like me would so appreciate that.
In the meantime, yes, smartphones do a good job as well, and they are quite expensive too.
Some of us have payment plans for our smartphones so we can have them. *wink
Looking forward for your next update on PiccyBot.
By the way, I was wondering if you could remove that one screen where you pick a photo from your albums and get to make adjustments to it. I just skip past that part; I don't know why it's there, but it certainly would not be missed.
I am legally blind, and I don't think I would care whether the photo needs to be rotated or not, but that's just me.
It's a cool app, but quite buggy
Hey there, I really like this app; it gives more detailed descriptions with voice feedback than Be My Eyes, but it's buggy. Sometimes after I send a picture to be recognised, it takes another picture, when my phone is already away from the thing I captured. Sometimes I need to press 'share audio' a few times, and that button should be on the main app window, not in settings; I should have the ability to choose what I want to share from the main app window. I understand this is the first version, and hopefully the bugs will be fixed.
Initial version with video processing
Hi guys, I have added video processing to PiccyBot. It's still a work in progress, the sharing of audio doesn't always work, the quality of the descriptions is not the best yet and the speed has to improve still. But since quite a few of you had mentioned you were looking forward to this, I decided to release this version.
Now there are two buttons, video and camera, and you get the option to take a direct video or photo, or select a video or photo from your image library.
The upload of the video takes the most time, depending on your network. After that you can go to settings (if subscribed) and switch voices, personality, language or ask specific questions about the video, which will be faster.
Have fun playing with it and let me know what your impressions are! I'll take on board the feedback from you guys, as always.
Live feed.
Personally I'll wait till these apps get live feeds, but again I have to commend you on getting features out so fast; this app is so cheap for what you get, guys.
Just because this video part isn't for me just yet does not mean by any means that it's bad. I've not tested it, and honestly, if there was a donate button, I'd gladly donate.
I've never been so impressed with an app that isn't for me, every single time an update comes out. Well done, dev, well done!
Video processing
First off, thanks for the wonderful work! Going to try it!
Btw, does video processing here mean live processing, or do I have to take a video and upload it? Also, do I have to select any particular model for this?
A couple of thoughts
I can't seem to ask further questions for either videos or images. This is on an iPhone SE 2020 running the latest version of iOS.
Also, could we have a haptic feedback option while images or videos are being processed? Similar to the ChatGPT app, if you're familiar with it.
Cannot find the option
I can't find any option to capture a video or upload one. How/where do I locate it?
Where are the pixies?
Ever since this was posted, I was convinced this app was called Pixie Bot. Last week I made the mistake of letting my wife use my phone and she found that it is actually called Piccy Bot. Well, that's not how VoiceOver says it! Very disappointing.
Anyway, I just wanted to say I was using this quite a lot over the last 2 weeks on holiday - it was nice to be able to take pictures with the Meta Ray-bans and then later on send them to the pixie for a proper interpretation. (Meta AI is OK for a quick note about what you are looking at but not a patch on these models). I hadn't noticed that the app had switched me to Gemini and it was incredibly good. I should also say that I probably use this in preference to other apps because of the tts voice it uses. My wife enjoyed listening to it too, particularly when I turned on personality mode and it made her sound like a celebrity.
The only thing I found is that if you are listening to a lot of photos you took on the same day it can be a bit tiresome hearing the same old comments about the sky or overall impression etc etc. In some ways it would be nice if I could have pointed it at a day's photos and the pixie could have given me a single detailed description encompassing all the photos. But that might be just this specific use case.
It's unbelievable to have a video option suddenly appear. I tried a few videos - they mostly worked fine in that they told me some of what the video was about but the level of detail is nothing like with photos. I'm hoping this might improve a little over time as it's an exciting new feature.
I love seeing the new things being added to this app - thanks so much to the dev.
Video option
Make sure you are on the latest version of the app (pull down to refresh in the App Store when looking at updates, just to be sure). Then there is a Video option just before Microphone in the app. So do a four-finger tap at the bottom of the screen to get to Camera, then swipe back twice.
As far as I can tell there is no way to do this via the share sheet, but hopefully that will come soon!
PiccyBot does not appear in share sheet outside camera/photo app
For example, in Safari, one can tap and hold an image, tap Share, and choose any of the sharing options without downloading the image first. The usual suspects such as Mail, Messages, etc. all show up, but PiccyBot is not there. I don't know why it only shows up in the photo viewer and not in other places.
re: latest update, video description and such
First let me say, I love how many updates are being done on this app. It makes me feel that even when there are glitches and bugs, or things we don't like, they'll get addressed sooner rather than later.
Personally, I favor using Claude Opus (I miss my Claude Opus!) and/or Llama3, because those two are the best when dealing with (sometimes) nude or erotic artwork, as I have to identify and label/catalog them in my work. While Gemini and GPT-4o consistently have the higher quality of description, they are so heavily censored they'll even reject non-nude art subjects, even when it's just a semi-erotic statue. Llama3 does give really weird descriptions at times; the other day I asked it to describe me, and it said I was covered in hair (which I'm definitely not), and that I was wearing "a dark shirt, a dark gray shirt, blue pants, and pants". ROFL. WTH.
Now on to the latest. I love the potential this video description will bring, but needing to have the video on your phone first is a drag. It'd be a lot better if we could, say, find a YouTube video and then use the "share" function to share it to the app. Also, I think the app only describes the first 30 seconds of the video. I thought that if I scrubbed the video to, say, 10 minutes 30 seconds, it would describe the scene there, but it doesn't appear to do that. Also, many videos are simply returning "server error."
Then again, this video description is very new, so I expect that in three to six months all of this will become a non-issue.
Thanks for the continuous updates and improvement of the app!
Nice updates! My input...
Hi Martijn.
This was fun, but the descriptions are lackluster and bland when personality is off, and the description of one video using the Llama3 AI model was strange! It told me something that did not happen, which I confirmed with the friend who sent me the video.
After that I stopped using Llama, LOL.
I took various short videos of my apartment, which is nicely decorated. Before I lost my eyesight I was heavily into interior decor, watching lots of HGTV for all these great ideas, so I decided to make my place stylish. But using this app to get descriptions by video was disappointing. It's not your fault.
I hope this gets more advanced, but in the meantime you have done very well. Bravo! I'm proud of you for a cool new feature,
even though there are bugs to kick out. Keep up the good work.
Please add a share sheet option for videos from the photo gallery; also, all the error messages are terrible.
Demo of PiccyBot
I followed the developer of this app on YouTube and stumbled upon this on my feed.
https://youtu.be/AnigBgi-_Ow?si=X_1pGQ5O8-hmfCEz
The thing is...
It's not trained to be an audio describer; if it were, then we'd get a lot more out of this.
We'd need to run the video through an app to get the markers and all that synced up, but once that's done, it would be pretty cool.
Re: What we want from videos
I think Lottie asks a good question, but like photos it depends a lot on specific use cases.
For example, I mainly use this for looking at memories - either holiday snaps or other things. And I was trying to use the video option in a similar way. So in my particular case I wasn't really needing audio description, I just wanted it to be able to give me a decent amount of detail to help me know what was going on. For example, it told me one video had my wife on a beach playing fetch with my dog, and it was good to get that but it hardly made me feel like I was there and getting a sense of what was going on. Another video was of my other dog no longer with us who was making funny noises in the dog bed whilst the recent cup final was on 5 live. The description picked up on the audio and told me the dog was there and looking tired but that was about it. I'm not sure if she was looking tired because she was in a bed or if she was actually looking ill, but it wasn't really what I was hoping for. I guess I wanted something that would give me more of a sense of her personality.
But if I was looking at an instructional video or trying to accomplish a task specifically then my requirements might be entirely different.
I think it might be good to have different options for videos - in terms of verbosity, AI model and so on. For example, Claude seems good for images, but it told me that there was a voice commentating on my dog in the video and she certainly wasn't heading in the winner against Chelsea as far as I was aware.
Updated app. The video compression was too high
Hi guys,
Another update is available. The audio sharing of videos should now always work. In the previous version I went a bit overboard with the video compression. Too many details were lost.
In the current update compression is more at a middle ground, and PiccyBot should be able to pick up more video details and progress.
I am thinking of adding a toggle in settings to put video compression on or off, or even the amount of compression (as Mr Grieves was suggesting). So you can decide for yourself if you want to wait longer for a better description.
Note that at the moment the model selection in settings only works for images. The video model is fixed still. Other settings like voice, personality and duration will work on the video description.
The sharing of videos to PiccyBot is not working yet, hopefully next update.
Thanks for the feedback, keep it coming!
Video compression
@Martijn Yes, having something like a slider for the amount of compression would be optimal. The problem before the update was that the video was being described almost the way a photo would be, meaning the AI wasn't picking up on any movement or progression within the video.
app not showing up in share sheet
I noticed since the last update, I'm no longer seeing the app in the share sheet.
A couple of issues
As of the last update, the query history is not showing up to be used as quick prompts. Also, using Llama3 and switching the language to Chinese, it would randomly generate in English rather than Chinese, and many times the generated Chinese text would bug out and start repeating itself like a broken CD. Like it'd say, "The person has a long beard and is wearing a blue shirt, the person's shirt is blue and he has a long beard, the person's beard is long and his shirt is blue, the person has a long shirt and is wearing a blue beard"... and on and on and on. LOL. Quite amusing, actually.
@Martijn
Having fun with the new updated version.
I used one of your other apps, called VoiceArt: I gave it good, detailed descriptions of a dream home, and it created the images for me. I put all of those images into a new photo album and made a video, and PiccyBot described the video, which was about 30 seconds long. It gave wonderful scene descriptions of each room, details about the furnishings, etc., and it improved the whole experience. Loves it.
I'm proud of the work you have put into this.
For now, I don't think you need to do anything else but enjoy the fruits of your labor until you feel like you need to improve or add some other feature.
You're welcome!
Thanks for all that you do.
For example
The video showcases a tour of a modern, luxurious home. We start in a living room with floor-to-ceiling windows revealing a stunning view of the ocean. The room is bathed in warm sunlight, and the soft white curtains billow gently in the breeze. The focus then shifts to the kitchen, a space brimming with sunlight streaming in through a large arched window. White cabinets line the walls, contrasting beautifully with the dark countertops. The camera pans over a home gym, equipped with a treadmill, a stationary bike, and a weight bench, showcasing a dedicated space for fitness. Next, we step into a home office, featuring a spacious desk, comfortable chairs, and a large window overlooking a lush green forest. A plant sits on a small table, adding a touch of nature to the room. The video takes us through a luxurious bathroom with twin sinks, a freestanding tub, a large shower enclosure, and a skylight overhead. The walls are adorned with tasteful tiles, and the room is both elegant and functional.
We then see a home theater with a large projection screen, plush leather seating, and dim lighting. The screen displays a vibrant scene with a woman laughing and holding popcorn. Next, the video showcases a master bedroom, with a king-size bed draped in soft linens. The space is punctuated by a stunning chandelier, a plush armchair, and a large rug. We then glimpse another bedroom, with a similar king-size bed, but this one features more modern décor. The room is brightly lit with a large window offering a tranquil view of the outdoors. The tour concludes with a peek into a laundry room, with a stainless-steel sink and a set of modern washing machines. The video ends with an exterior shot, showcasing a relaxing patio with a hot tub, a fireplace, and a grilling station. The entire video offers a glimpse into the spacious and elegant lifestyle of the homeowner.
This was using the PiccyBot AI model GPT-4o.
However, when I did each photo individually using Be My AI, I got more detailed descriptions of each room and the outside backyard, and then I knew exactly what was being referred to in the video.
It's funny using all this technology to see the world through my iPhone.
I did it! I finally downloaded this.
Hi Martijn and all,
I finally decided to give this app a go after hearing a demonstration on the Double Tap podcast. I always thought it was a pixie too, until I actually looked at the spelling! The guys were pronouncing it as 'picky'. I didn't try this app at first because I thought it would be as good or as bad as Be My AI, but I liked the voice, so I decided to give it a go. I have some feedback, if you don't mind. First, could you get the app to autodetect languages? Be My AI and GPT do this. The reason I say this is that I deal with texts in multiple languages, for example English and something else. By the way, it reads text from screenshots of ebooks really well. It can read lots of languages, but I have to tell it which language apart from English I want it to recognise. Also, could you please allow people to share a URL to a video, or to share videos directly from sites like YouTube. So Olly and Lottie, you were asking or suggesting what we want from video descriptions. How about something like this:
The YouTube video is titled, "5 Best Languages To Learn in 2024." a man is speaking. He is wearing a gray sweater and is positioned slightly to the right of the center. Around him are five colorful circles, each with a number inside:
1. Red circle with the number 1
2. Blue circle with the number 2
3. Green circle with the number 3
4. Yellow circle with the number 4
5. Purple circle with the number 5
The background shows a cozy room with bookshelves, a couch, and a lamp. That was from Be My Eyes on Windows. I know this is like a picture, but isn't audio description on TV describing pictures on screen anyway? The only real difference is that audio descriptions on TV or as part of a film tell you which character is doing what. Maybe the descriptions could be divided into frames? Yeah, I know, this is like sci-fi. I get it, Martijn, if this sort of thing isn't possible! This app is worth supporting. I purchased a lifetime subscription today. This has so much potential. It's even great as it is now. Thank you for making an app like this! It's yet another tool we can add to our arsenal. I think I've quoted that from someone else.