To be fair, it does seem to work for any two numbers where one is > 1, since ln(e^x + e^y) <= ln(2 * e^(max(x,y))) = max(x,y) + ln(2).
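A quick numeric check of that two-number bound (my own Python sketch, not from the thread): ln(e^x + e^y) never exceeds max(x, y) + ln(2).

```python
import math

# Check ln(e^x + e^y) against the bound max(x, y) + ln(2) for a few pairs.
for x, y in [(3, 7), (10, 10), (100, 101)]:
    lse = math.log(math.exp(x) + math.exp(y))
    print((x, y), round(lse, 4), round(max(x, y) + math.log(2), 4))
```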
I think it's cool because it works for any number of variables.
Using the same proof as before we can see that: ln(sum_{i in I} e^(x_i)) <= ln(n * e^(max_i x_i)) = max_i x_i + ln(n), where n is the number of variables.
So it would only work for at most [the base of your log, so e < 3 for ln] variables.
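To make the ln(n) growth concrete, here is a small sketch of my own (not from the thread) using n equal values, where the gap over the true max is exactly ln(n):

```python
import math

# With n copies of the same value x, ln(sum_i e^(x_i)) = ln(n * e^x) = x + ln(n),
# so the error over the true max is exactly ln(n) and grows with n.
x = 10
for n in range(1, 6):
    lse = math.log(n * math.exp(x))
    print(n, round(lse - x, 3))   # prints ln(n): 0.0, 0.693, 1.099, ...
```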
After searching a little, I found the name of the function and its proof: https://en.wikipedia.org/wiki/LogSumExp
Thanks for looking it up :).
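As a side note, if you'd rather not build the formula by hand, the same function is available in SciPy (assuming SciPy is installed) as scipy.special.logsumexp:

```python
import numpy as np
from scipy.special import logsumexp

# log-sum-exp of a small vector; the result sits just above the max.
x = np.array([1.0, 2.0, 10.0])
print(logsumexp(x))   # ~10.0005, close to max(x) = 10
```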
I do think the upper bound on that page is wrong, though. Incidentally, in the article itself only the lower bound is proven, but among its sources this paper proves what I did in my comment before as well:
For the upper bound it has max **+ log(n)** (Section 2, eq. 4). This lets us construct an example (see the reply to your other comment) to disprove the notion of being able to calculate the max for many integers.
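A hedged numerical check of those two bounds (the lower bound from the article, the upper bound max + log(n) as cited), in Python rather than anything from the paper itself:

```python
import math, random

# Check max(x) <= ln(sum_i e^(x_i)) <= max(x) + ln(n) on random inputs.
random.seed(0)
for _ in range(1000):
    n = random.randint(2, 20)
    xs = [random.uniform(-50, 50) for _ in range(n)]
    m = max(xs)
    lse = m + math.log(sum(math.exp(x - m) for x in xs))  # stable evaluation
    assert m <= lse <= m + math.log(n) + 1e-9
print("both bounds held for all samples")
```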
I just remembered where I learned about that function: this course on convex optimization, which unfortunately I never had the opportunity to finish, but it is really good.
I don't have a mathematical proof, but doing some experimental tests in Excel, using multiple (more than 3) numbers and using negative numbers (including only negative numbers), it works perfectly every time.
Try (100, 100, 100, 100, 100, 101) or 50 ones and a two; these should result in 102 and 4 as the max, respectively. I tried using fewer numbers, but the fewer numbers you use, the higher the values have to be (to be exact, it takes less of a deviation (%-difference) between the values, which results in higher numbers), and WolframAlpha does not like 10^100 values, so I stopped trying.
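For anyone who wants to check without Excel or WolframAlpha, here is a small Python sketch of the same two examples (my code, not the commenter's spreadsheet; lse() is just log-sum-exp in its stable form):

```python
import math

def lse(xs):
    # Stable log-sum-exp: max + ln(sum of shifted exponentials).
    m = max(xs)
    return m + math.log(sum(math.exp(x - m) for x in xs))

a = [100] * 5 + [101]
b = [1] * 50 + [2]
print(round(lse(a), 2), max(a))   # ~102.04 vs. true max 101
print(round(lse(b), 2), max(b))   # ~4.97 (floors to 4) vs. true max 2
```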