
Low light and AI

stunta

Senior Member
Patreon Donor
Joined
Jan 1, 2018
Messages
384
Likes
188
Location
Boston, MA
#1

Cosmik

Major Contributor
Joined
Apr 24, 2016
Messages
2,162
Likes
864
Location
UK
#2
I'm sceptical, but it's nice work if you can get it.

I presume the idea is for the system to (implicitly) identify contiguous regions in the image and 'patch over' them with smoothly interpolated colour. I'm sure it works perfectly - until it doesn't..!
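To make the 'patch over with interpolated colour' intuition concrete, here is a toy sketch in plain NumPy (my own illustration, nothing to do with the paper's actual network): the crudest version of that idea is a mean filter, which replaces each pixel with the average of its neighbourhood.

```python
import numpy as np

def box_denoise(img, k=3):
    """Naive 'smooth patch' denoiser: replace each pixel with the mean
    of its k-by-k neighbourhood (edges handled by repeat-padding)."""
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.zeros(img.shape)
    for dy in range(k):            # sum the k*k shifted copies...
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)           # ...and average them
```

The catch is exactly the one above: this averages away noise and genuine fine texture alike, and a learned system that only minimises pixel error can converge on much the same behaviour.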

I speak as someone who once became obsessed with neural networks as the answer to everything, but gradually began to understand that although they miraculously seem to obviate the need for complex mathematics and deep understanding, they are really just transferring it to the problem of analysing and selecting the right training data!

Many of these AI systems work at 90% effectiveness on the very first day you try them with a smattering of data. It is very easy to 'sell' the project at this stage. (To yourself and other people).

Three years later and running a massive server farm of training and testing data, you're at 80%. But the system will be useless unless you get to 99.99%. How long can you spin the project out for..? At first you thought you would be happy to do it forever, but you'd really much rather go off and do something else..! :)

Edit:
Just as an example. Suppose you decided that a common problem in smartphone photographs was 'chopping off feet'. Could AI help here?

Well, you could indeed show some neural networks thousands of pictures with chopped off feet against the desired image, having created these examples from real photographs. The networks would 'learn' what was expected, possibly learning to implicitly identify legs and to extend them to an average length relative to head size (that it also identified) and to place on the end of them some commonly recurring blobs that it saw in the examples.
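For what it's worth, here is a sketch of how such (damaged, desired) training pairs could be manufactured from real photographs - the function name and the 15% figure are hypothetical, just to make the idea concrete:

```python
import numpy as np

def make_training_pair(photo, crop_frac=0.15):
    """One (input, target) example for the hypothetical 'chopped-off
    feet' task: target is the full photo; input is the same photo with
    the bottom crop_frac of rows blanked, simulating a framing error."""
    cut = int(photo.shape[0] * (1 - crop_frac))
    damaged = photo.copy()
    damaged[cut:] = 0      # 'chop off' the bottom of the frame
    return damaged, photo
```

The network then only ever knows what these pairs contain - which is the whole point about the understanding living in the training data.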

But would it substitute the right sort of shoe? If you only showed it training shoes, it would only know about training shoes. If you showed it many types of shoe, it might learn to associate women with high heels and men with trainers, but on the other hand it might find that its error is least if it substitutes a fuzzy composite average of all shoe types. Would it show the shoe in the correct pose? Again, if you don't get the balance of examples right, the shoes might end up as a fuzzy blob or pointing in the wrong direction. This is where an attempt is being made to substitute dumb network training for an understanding of human anatomy and an analysis of perspective. The problem has simply been transferred to the selection of training data. Perversely, the scientist might start analysing the training data with algorithms before submitting it to the neural net training system! Or, failing that, it is a long, hard 'ad hoc' slog towards ultimate failure three years later...

The above example is 'obviously' doomed to failure, but many other examples may be equally doomed without it being obvious. If you are using neural networks in this way, it shows that you don't really understand the problem so cannot predict whether it is solvable or not.

Another dimension is the 'hilarious' results that might occur - and the problem of the Twitter mob turning on you if your system ends up with inadvertent racial, gay or transsexual prejudices built in!
 

stunta

#3
There is a difference between improving SNR and generating new signals, so I am not sure the "chopped-off feet" analogy applies in this context.

Having worked on AI systems for the past few years, the general pattern I have seen is that a system gets worse in real production scenarios before it gets better. There are several iterations before the solution stabilizes - this is just how real-world products go in AI, especially if the target audience is diverse.

Cosmik said:
"But the system will be useless unless you get to 99.99%"
What is the basis of this statement? Acceptable accuracy varies with the severity of false positives and false negatives. We generally use at least two metrics that go hand in hand - precision and recall. With some features we sacrifice recall quite a bit in favor of precision, and in some cases it's the other way around. Four 9s may not be a requirement, especially if the feature is optional. If the FBI is using this to provide evidence toward serious charges, then yes, I can understand a high-precision requirement. If I am using it for casual photos of my dogs, I can live with a few bad results as long as a majority of them make the photos more usable.
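To put rough numbers on that trade-off, a toy sketch (all counts made up):

```python
def precision_recall(tp, fp, fn):
    """tp: correct detections, fp: false alarms, fn: misses."""
    precision = tp / (tp + fp) if (tp + fp) else 0.0  # of what we flagged, how much was right
    recall = tp / (tp + fn) if (tp + fn) else 0.0     # of what was there, how much we found
    return precision, recall

# A feature tuned for precision accepts more misses to avoid false alarms:
p, r = precision_recall(tp=90, fp=10, fn=60)   # p = 0.90, r = 0.60
```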

That being said, I don't know if DNNs are the best solution to this problem. The industry has a tendency these days to reach for ML as the first option. IMO, ML should be considered only if traditional solutions do not do a good enough job. I think this is what the paper is trying to say as well - that their ML solution is superior to existing solutions.

I think this is promising at best. The positive takeaway for me is that camera manufacturers and post-processing software makers at least have some place to start to improve on SNR in low-light photography.
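On that last point, the classical non-ML baseline worth remembering is frame stacking: averaging N exposures of the same scene improves SNR by roughly √N (a factor of N in noise power). A sketch assuming simple additive Gaussian noise:

```python
import numpy as np

def snr_db(clean, noisy):
    """SNR in decibels: mean signal power over mean noise power."""
    noise = noisy - clean
    return 10 * np.log10(np.mean(clean ** 2) / np.mean(noise ** 2))

rng = np.random.default_rng(0)
scene = np.full(10_000, 50.0)                      # flat low-light 'scene'
frames = [scene + rng.normal(0, 5, scene.shape) for _ in range(16)]

single = snr_db(scene, frames[0])
stacked = snr_db(scene, np.mean(frames, axis=0))   # classic frame stacking
# averaging 16 frames should gain roughly 10*log10(16) ≈ 12 dB over one
```

Any learned denoiser has to beat this kind of baseline from a single exposure to be worth the trouble.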
 

Cosmik

#4
stunta said:
"There is a difference between improving SNR and generating new signals..."
I am talking about the idea - the general situation of thinking "I'll get neural nets to do it". In this case, the neural network may in fact be synthesising 'new signals' for the image in order to improve apparent SNR. That is, a neural net will do whatever it can to minimise its training error, and it doesn't necessarily look at the scene the way we do.

We might be able to tell that a section of patterned wallpaper stretching into the distance is the same texture shrinking in size. The neural net doesn't (necessarily) have any knowledge of perspective or how repeating patterns work (if it hasn't been given the right examples) so it may do something very odd with it, and replace the whole thing with large patches of uniform colour close up, and then a single average shade in the distance - or whatever. You get the idea.

The 'feet example' was a more extreme version - I wrote:
"The above example is 'obviously' doomed to failure, but many other examples may be equally doomed without it being obvious"
to highlight this.

Sure, 95% reliability may be enough for pictures of dogs, but not for a self-driving car, say - again I was just talking about the idea of using automated systems that are based on 'AI' not necessarily this specific example.

Why did I chip in with my now obviously-shown-by-you-to-be-stupid comment? Well, I was hoping that it might shed a small chink of light (pun not intended but spotted before publication anyway) on a subject that, to most people, is a closed book. Perhaps I am hinting that a report with some neatly-wrapped up images can sometimes be a somewhat optimistic view of work like this. My final year dissertation regarding a specific audio DSP algorithm included an audio recording (on cassette - how quaint) and I can tell you that it was very impressive - but very selective! :)
 
