Can you find a(n) as an explicit function of n?

For natural numbers n = 1, 2, 3, ..., the sequence is defined by:

a(n) = a(2n)+a(2n+1) =

= a(3n)+a(3n+1)+a(3n+2) =

= a(4n)+a(4n+1)+a(4n+2)+a(4n+3) =

= a(5n)+a(5n+1)+a(5n+2)+a(5n+3)+a(5n+4) =

...

The trivial solution a(n) = 0 is out of the question.

Update:

@mattmiller, we must have a(3) = a(9) + a(10) + a(11), but your sequence gives 1/2 = 3/8?

Update 2:

@Torquestomp, that is the real question: are there enough independent equations?

I have a sequence, rounded to 3 decimal places, so some error may occur, but all the conditions are obeyed pretty well:

1.000; 0.585; 0.415; 0.322; 0.263; 0.222; 0.193; 0.170; 0.152; 0.137; 0.125; 0.116; 0.107; 0.099; 0.093; 0.087; 0.083; 0.078; 0.074; 0.070; 0.067; 0.064; 0.062; 0.059; 0.057; 0.054; 0.052; 0.050; 0.049; 0.047; 0.046; .....
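As a quick sanity check (a Python sketch using the rounded values listed above), one can verify every condition with k = 2 through 5 that fits inside the 31 listed terms:

```python
# Rounded values a(1)..a(31) from the list above; index 0 is a placeholder.
a = [None, 1.000, 0.585, 0.415, 0.322, 0.263, 0.222, 0.193, 0.170, 0.152,
     0.137, 0.125, 0.116, 0.107, 0.099, 0.093, 0.087, 0.083, 0.078, 0.074,
     0.070, 0.067, 0.064, 0.062, 0.059, 0.057, 0.054, 0.052, 0.050, 0.049,
     0.047, 0.046]

worst = 0.0
for k in range(2, 6):                  # the conditions listed above
    n = 1
    while k * n + k - 1 <= 31:         # stay inside the listed terms
        s = sum(a[k * n + i] for i in range(k))
        worst = max(worst, abs(a[n] - s))
        n += 1
print(worst)   # every check agrees to within rounding error
```

Every deviation comes out smaller than the accumulated rounding of the 3-decimal values.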

Update 3:

Actually, I got this sequence when trying to find the distribution of the first digit in the sequence 2^n, for large n. First, I noticed (by calculation in Excel) that the same distribution occurs for 3^n, 4^n, 5^n, etc.; then I found that it happens in the same way (or very similarly) in systems with a base different from 10. So, my example above is from a base-100 system, for the first digit of 2^n, for n = 1 to 10000. Then I thought about it this way:

After a number with first digit "1" (when multiplying by 2), the next number must have first digit "2" or "3", so p(1) = p(2) + p(3), where p is the probability. After "2" the next must be "4" or "5", so p(2) = p(4) + p(5), etc. But if the distribution is really the same for 3^n, then p(1) = p(3) + p(4) + p(5), because after "1" as first digit, multiplying by 3 can only give 3, 4 or 5 as the first digit of the next number; then p(3) = p(9) + p(10) + p(11), and so on... I hope you will understand. :)
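The Excel experiment is easy to reproduce (a Python sketch, not the asker's original spreadsheet); Python integers are exact, so the powers never overflow:

```python
# Empirical first-digit frequencies of 2**n in base 10.
from collections import Counter

N = 2000
counts = Counter(str(2 ** n)[0] for n in range(1, N + 1))
freq1 = counts['1'] / N
print(freq1)   # close to 0.30103 = log10(2)
```

The digit "1" leads about 30.1% of the time, exactly the Benford frequency log10(1 + 1/1).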

7 Answers

  • 1 decade ago
    Best Answer

    Yes a(n) = log[1 + (1/n)]

    (which gives probability of getting "n" as leading digit)

    Thanks, Dragan K, for sharing the logic behind your sequence.

    You should have mentioned this earlier; that would have saved a lot of time :)

    Really interesting!!! I answered the bonus question asked by 太极拳 ‽-ℓ-‽ about the first digit of 2^n, but it never occurred to me that Benford's law, when extended, can produce such strange results :)

    The following sequence is what you get for log(1+(1/n)); it does indeed satisfy all the conditions!!!

    0.30103, 0.17609, 0.12494, 0.09691, 0.07918, 0.06695, 0.05799, 0.05115, 0.04576, 0.04139, 0.03779, 0.03476, 0.03218, 0.02996, 0.02803, 0.02633, 0.02482, 0.02348, 0.02228, 0.02119, 0.0202, 0.01931, 0.01848, 0.01773, 0.01703, 0.01639, 0.01579, 0.01524, 0.01472, 0.01424, 0.01379, 0.01336, 0.01296, 0.01259, 0.01223, 0.0119, 0.01158, 0.01128, 0.011, 0.01072, 0.01047, 0.01022, 0.00998, 0.00976, 0.00955, 0.00934, 0.00914, 0.00895, 0.00877, 0.0086, 0.00843, 0.00827, 0.00812, 0.00797, 0.00783, 0.00769, 0.00755, 0.00742, 0.0073, 0.00718, 0.00706, 0.00695, 0.00684, 0.00673, 0.00663....................
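For what it's worth, the identity can be confirmed numerically; a quick sketch (any base of logarithm works, since the conditions are linear in a):

```python
import math

def a(n):
    return math.log10(1 + 1 / n)

# Check a(n) = a(kn) + a(kn+1) + ... + a(kn+k-1) for many n and k.
worst = max(abs(a(n) - sum(a(k * n + i) for i in range(k)))
            for k in range(2, 20) for n in range(1, 200))
print(worst)   # floating-point noise only
```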

    I would also like to add that a(n+1) approximately equals the harmonic mean of a(n) and a(n+2); Mirya E figured this out a long time ago. I do not know why this works, but it seems really interesting!
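The harmonic-mean observation checks out numerically; a sketch (the helper names are mine), together with a plausible reason: for large n, a(n) ≈ log10(e)/n, and the harmonic mean of 1/n and 1/(n+2) is exactly 1/(n+1).

```python
import math

def a(n):
    return math.log10(1 + 1 / n)

def hmean(x, y):
    # Harmonic mean of two positive numbers.
    return 2 * x * y / (x + y)

# Worst relative error of the approximation a(n+1) ~ hmean(a(n), a(n+2)).
err = max(abs(a(n + 1) - hmean(a(n), a(n + 2))) / a(n + 1)
          for n in range(1, 100))
print(err)   # small, and shrinking as n grows
```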

    *~*~*~*~*~*~*~*~*~*~*~*~*~*~*~*~*~*~*~*~*~*~*~*~*~*~

    Torquestomp:

    log(1 + 1/n) works because of a unique property of the logarithm,

    i.e. log(x) + log(y) + log(z) + ........... = log(xyz............)

    let's see what really happens:

    log(1 + 1/n) = log[(n+1)/n]

    Notice that our required sum from a(kn) to a(kn + k-1) must give a(n)

    inside log it will be [(kn+1)/kn] * [(kn+2)/(kn+1)]* ......* [(kn+k-1) / (kn+k-2)] * [(kn+k) / (kn+k-1)]

    which will reduce to log[(kn+k)/kn]

    = log[(n+1)/n]

    = log[1+ 1/n]

    = a(n)

    Thus any function f() where f(x) + f(y) + f(z) + ...... = f(xyz......) will give us such a sequence with (1 + 1/n)

    So look for such functions :)

    Source(s): Excel VBA
  • 1 decade ago

    I don't know if this really helps, but let me just throw it out there:

    If we let equation k represent a(n) = a(kn) + a(kn+1) + ... + a(kn+(k-1)), then it suffices to prove a(n) obeys equation k for all prime numbers k. All composite numbered equations can be derived from the primes.

    For example, given a(n) = a(2n) + a(2n+1), we substitute:

    a(2n) = a(2(2n)) + a(2(2n)+1) = a(4n) + a(4n+1)

    a(2n+1) = a(2(2n+1)) + a(2(2n+1)+1) = a(4n+2) + a(4n+3)

    Therefore a(n) = a(4n)+a(4n+1)+a(4n+2)+a(4n+3)
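The substitution argument is easy to mechanize; a small sketch (the `expand` helper is mine, not from the thread), checking that composing prime substitutions yields the composite-k equations:

```python
def expand(terms, p):
    """Apply 'equation p' to every index: a(m) -> a(p*m) + ... + a(p*m + p-1)."""
    return [p * m + i for m in terms for i in range(p)]

n = 7                                            # any concrete n works
k4 = sorted(expand(expand([n], 2), 2))
assert k4 == [4 * n + i for i in range(4)]       # equation 4 from 2, then 2

k6 = sorted(expand(expand([n], 2), 3))
assert k6 == [6 * n + i for i in range(6)]       # equation 6 from 2, then 3
print("composite equations follow from the prime ones")
```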

    ***

    Also, since all the equations defining a(n) are linear, a(n) can be modified by a scalar without violating its properties. So, arbitrarily, we can assume a(1) = 1

    -- EDIT (+1 hour) --

    Sorry, a(n) can't exist. Here's my proof:

    If a(n) can exist, then for any arbitrary positive integer N, we should be able to define a sequence of real numbers, not all 0, such that all equations whose variables are contained in the sequence are satisfied.

    Let's use the "k equation" I defined earlier. We have shown that all composite k equations can be derived from the prime number equations. I will assert here that all finite extensions of the general prime number equations are linearly independent: if anyone wants to crunch the arithmetic and prove it, go ahead.

    Now for each N, there are exactly [(N-p)/p] equations for each prime number that govern the sequence, where [x] is the greatest integer function. For N = 26 elements, there are 27 equations. If you understand Linear Algebra, you know this is a problem.

    It gets worse. I tested out N up to 2000, and for N > 26, the number of equations is consistently > N, and the margin increases linearly. For N = 87, there are 123 equations. For N = 2000, there are more than 3000 equations.

    The only part left to make the proof rigorous is to show that the finite extensions of the prime number equations are in fact linearly independent. The thing is, even if they were, I doubt there would be enough linear dependence to account for the massive margins.

    -- EDIT (+9 hours) --

    I'll see if I can crunch the matrix to find out how many independent equations there are. If I get a concrete contradiction, I'll let you know.

    -- EDIT (+23 hours) --

    The expression [(N-p)/p] I mentioned earlier is incorrect - the correct expression is [(N+1-p)/p]. Allow me to explain this:

    For any prime number p, we have exactly one finite equation:

    a(k) = a(pk) + a(pk+1) + ... + a(pk+p-1)

    ...for every natural number k.

    If we have a finite sequence of length N, we can only include the equations (p, k) where pk+p-1 <= N in the sequence axioms.

    Therefore, for any given prime number p, we can solve for the maximum value of k:

    pk+p-1 <= N

    pk <= N+1-p

    k <= (N+1-p)/p

    Therefore k_max = [(N+1-p)/p].
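The count and the matrix rank can both be checked by machine; a sketch in exact rational arithmetic (helper names mine). Note that with the corrected bound [(N+1-p)/p] the count for N = 26 comes out to 28 rather than the 27 quoted earlier, which used the uncorrected formula. The rank nevertheless stays below N, so a nonzero solution survives:

```python
from fractions import Fraction

def primes_upto(n):
    sieve = [True] * (n + 1)
    sieve[0:2] = [False, False]
    for i in range(2, int(n ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i::i] = [False] * len(sieve[i * i::i])
    return [i for i in range(2, n + 1) if sieve[i]]

def equations(N):
    """All rows a(k) - a(pk) - ... - a(pk+p-1) = 0 with pk+p-1 <= N, p prime."""
    rows = []
    for p in primes_upto(N):
        k = 1
        while p * k + p - 1 <= N:
            row = [Fraction(0)] * N          # columns = a(1)..a(N)
            row[k - 1] += 1
            for i in range(p):
                row[p * k + i - 1] -= 1
            rows.append(row)
            k += 1
    return rows

def rank(rows):
    """Exact rank by Gaussian elimination over the rationals."""
    rows = [r[:] for r in rows]
    r = 0
    for c in range(len(rows[0]) if rows else 0):
        piv = next((i for i in range(r, len(rows)) if rows[i][c] != 0), None)
        if piv is None:
            continue
        rows[r], rows[piv] = rows[piv], rows[r]
        for i in range(r + 1, len(rows)):
            if rows[i][c] != 0:
                f = rows[i][c] / rows[r][c]
                rows[i] = [x - f * y for x, y in zip(rows[i], rows[r])]
        r += 1
    return r

N = 26
eqs = equations(N)
print(len(eqs), rank(eqs))   # more equations than unknowns, yet rank < N
```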

    ----

    @Dr. Octavian: Interesting theory, but I don't think the logic holds. For your deduction a(1) = 0 to be true, we have to take twice as many left-hand expressions as we do right-hand expressions, which isn't algebraically valid.

    Consider this:

    n = (n-1) + 1

    1 = 0 + 1, 2 = 1 + 1, 3 = 2 + 1, ..., etc.

    If we sum these equations to infinity, then on the left-hand side we get all integers from 1 to infinity exactly once. On the right-hand side we get all integers from 0 to infinity exactly once, plus a 1 from every equation; so if we subtract all integers from 0 to infinity from both sides, we should get:

    0 = 1 + 1 + 1 + 1 + ... (infinity times).

    This is obviously false.

    We can't assume that the sum converges just because the terms have a 1-to-1 correspondence. The terms must have a 1-to-1 correspondence within a finite set of equations for us to take a limit for infinite equations.

    Personally, I would be rather disappointed if a(n) = 0 was the only solution. I'm still working on my matrix program - it would be really cool if the equations take on some sort of prime-sieving-behavior, such that we always have enough dependent equations to allow for a consistent system. Hopefully I will have something useful soon.

    -- EDIT (+2 days) --

    Yay Vikram P! We finally got it!

    It's remarkable how much linear dependence there is among the equations, even when only the prime numbers are considered.

    I wonder, however - could another function exist?

    Given the log(1 + 1/n) argument, it can be seen that any pair of functions a and g satisfying the equations:

    a(n) = g(n+1) - g(n)

    g(n) = g(kn+k) - g(kn) (.. for all integers k > 1)

    Will give us the sequence we desire.

    @Dr. Octavian - Let me just redeem myself here by saying that you have already brought up the point I was subconsciously trying to address: if you attempt to take the limit of anything towards infinity, you cannot perform algebra on infinite quantities unless you either know they converge ultimately, or know they converge in a 1-to-1 correspondence in terms of the operation you are performing. Now, with Vikram P's formula, we know for a fact that the sum a(n)+a(n+1)+...+a(2n-1) converges to log(2) as n --> infinity.

    Also, just so you know, O(1/n) is the minimum order for divergence, not O(1/n^2). As can be seen from the Riemann zeta function, a series whose terms are of order 1/n^k converges for any real number k > 1.
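In fact, by the same telescoping, a(n) + a(n+1) + ... + a(2n-1) equals log10(2) exactly for every n, not only in the limit; a quick check:

```python
import math

def a(n):
    return math.log10(1 + 1 / n)

# Telescoping: sum of log10((m+1)/m) from m = n to 2n-1 is log10(2n/n).
for n in (1, 10, 100, 1000):
    s = math.fsum(a(m) for m in range(n, 2 * n))
    assert abs(s - math.log10(2)) < 1e-12
print("partial sums all equal log10(2)")
```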

  • 1 decade ago

    Okay, I'd like to throw my hat into the ring. Please leave this question open to give me more of a chance to think about this interesting problem. I believe that such a sequence of numbers can exist, but I don't know if it could be expressed in terms of elementary functions.

    Edit: Please don't close this, I'm still working on it. This is a really interesting problem for me; it reminds me of some calculations I used to do in image processing work, on the subject of image deconvolution.

    Edit 2: Vikram P just notified me of his solution, and I see it here now. Yes, that is definitely a solution; kudos to Vikram P for having found it. All I can say is that I suspected it would be a function based on the logarithm, but Vikram P beat me to it. Who knows if I would ever have found it.

    This has been a very interesting subject. Thanks for the excellent question and a very satisfying solution.

  • Anonymous
    4 years ago

    f(2) = 3f(1) + 4f(0) = 3*1 + 4*0 = 3

    f(3) = 3f(2) + 4f(1) = 3*3 + 4*1 = 13

    f(4) = 3f(3) + 4f(2) = 3*13 + 4*3 = 51

    Note: where you say n > 2 it should say n ≥ 2, otherwise you cannot do this.

  • 1 decade ago

    @ Vikram P: ...but does that necessarily mean there can be no closed analytical form for the sequence? Certainly it will be unlikely to find on using the equations or linear algebra alone, but I hold out hope for other methods at the moment; unless you can explain to me otherwise, I can't see why there's any more reason to believe that there isn't a closed form for this sequence than to believe that there is.

    ... also, I think I can see an error in your calculation - you say

    "a(kn) = a(2kn) + a(2kn +1)

    So we always have to define a(2kn) which can be any value less than

    a(kn) and after which a(2kn + 1) = a(2kn) - a(kn)."

    Shouldn't this be "a(2kn + 1) = a(kn) - a(2kn)"? Does this change the result? Also, even if 0 < a(2kn) < a(kn), this doesn't mean that a(2kn) can take just -any- value in that range, since its value is fixed by the other equations which we haven't mentioned. E.g. for 0 < a(4) < a(2): a(4) cannot take just any value between 0 and a(2), since it is also fixed by a(4) = a(8) + a(9), a(2) = a(4) + a(5), a(4) = a(12) + a(13) + a(14), etc. In a way, all constants have to be chosen 'simultaneously' so that they agree not only with their first generating equations but with all later correspondences.

    Unless I misunderstood what you meant?

    Sorry to be a pain if I am; I'm actually hopeful for finding this sequence so any information on whether it's possible or not is good news. Good job for getting the correct sequence though and reproducing the code :) we know one exists - can we just find a way of expressing it?

    PREVIOUS POSTS:

    I've just been working on this, and it's a doozy :) Dragan, that does indeed appear to be a sequence which fits the conditions... did you get it from a computer?

    I just wanted to point out something odd which I came across and can't explain. The sequence indeed exists, of that I have no doubt (and Torquestomp - you may be right about the equations you get being impossible to solve in linear algebra, but I think that just means there's freedom in the solution, and maybe a lot of that freedom is disallowed by the problem or something, I'm not sure); but consider this: take the first equation

    a(n) = a(2n) + a(2n + 1)

    and write it out for n from 1 to infinity:

    a(1) = a(2) + a(3)

    a(2) = a(4) + a(5)

    a(3) = a(6) + a(7)

    ...

    a(k) = a(2k) + a(2k + 1)

    ...

    It is clear that there is no 'overlap' on the right hand sides: the left hand sides contain one of every term from a(1) to infinity, and the right hand sides contain each term from a(2) to infinity exactly once. But surely this means that if you sum all the equations to infinity, you get

    a(1) + a(2) + a(3) + a(4) + ... = a(2) + a(3) + a(4) + ..

    and subtracting (a(2) + a(3) + ...) you get that a(1) = 0! Worse, using each successive family of equations, you get that a(k - 1) = 0, obviously for all k! Since we have a numerical solution to this, this is clearly nonsense :(

    ... or is this simply a sign that the sum of the series we're looking for does not converge? Or something? Did I just unwittingly prove something about the series without realising?

    @Torquestomp: "The terms must have a 1-to-1 correspondence within a finite set of equations for us to take a limit for infinite equations." - why exactly? Many results about 1-to-1 correspondence only hold for infinitely large sets and do not hold if we truncate those sets before infinity.

    Even if you have to take twice as many equations on one side as the other, you still end up with one term each when summing to infinity. The fact that we need to go down to twice as many equations is irrelevant, since there are an infinite number of them, and when summed there is only one equation after all, which contains all terms once, as constructed. This just means that to cancel terms we have to look further along the sum on the right than on the left - which isn't a problem with an infinite sum. I mean, write out the whole sum to infinity and check that every term on the left is matched by exactly one on the right - if you can prove this isn't true then fair enough, but it is true and provable. This trick was used many times by Cantor and Hilbert to prove things about the denumerability of sets and the sums of series. It would not be valid if we terminated the equations at a finite point, as you say; but taken to infinity, it is perfectly algebraically valid - there are aleph-nought left-hand sides and aleph-nought right-hand sides, aleph-nought terms altogether, all but one of which can be placed in 1-to-1 correspondence.

    I'm thinking, for instance, of when you place the even numbers in 1-to-1 correspondence with the naturals. To do this for a finite group, you may have to take twice as many numbers in one sequence to get the evens to generate a finite expansion, but taken to infinity this isn't a problem, since there are always more numbers. To say this is invalid because you have to look at twice as many numbers in one sequence as in the other is erroneous. Again, if one terminated the sequence, it's obvious that the correspondence fails - not every number in the set {1, 2, ... N} can be put into 1-to-1 correspondence with the evens in the same set - there are only half as many. But taken to infinity, there are the same number of elements in the naturals as in the evens, and the correspondence holds, as Cantor showed. Here, there are always more equations; once we get down to the kth equation, which requires the 2kth and (2k+1)th equations, we still have these somewhere below in our expansion... note that in the equation a(1) + a(2) + ... = a(2) + a(3) + ..., there are -the-same-number- of terms on each side of the equation: aleph-nought + 1 = aleph-nought on the left and aleph-nought on the right. The fact that these can cancel term by term just says to me that what the above equation actually says is a(1) + infinity = infinity; i.e. that the sum taken from a(2) diverges.

    I would argue that your 0 = 1 + 1 + ... example shows the same thing I think I have shown - you tried to sum the natural numbers and implicitly assumed that the sum comes to a limit, otherwise you couldn't take it off both sides; so if you try to sum a series that isn't convergent in the first place (just as the sum of the natural numbers isn't in your example), you get nonsense answers. Just to clear things up - I didn't assume the sum converged because of its 1-to-1 correspondence; what I meant was that if a(1) = 0 holds, then the only mistaken assumption I have used is that I can subtract a(2) + a(3) + ... from both sides - and if that isn't allowed, it must be because that quantity is infinite.

    I'm not arguing using my logic that the only sequence is a(n) = 0; as I say, it's obvious a solution exists numerically. I was just pointing out a possibly explicable anomaly :)

    PS if this does mean the sum doesn't converge, that at least tells us something - for instance, it tells us about behaviour of single terms for large N (e.g., they must be greater than O(1/n^2) for divergence, etc.)

  • 1 decade ago

    1, 1/2, 1/2, 1/4, 1/4, 1/4, 1/4, 1/8, 1/8, 1/8, 1/8, 1/8, 1/8, 1/8, 1/8...
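(This is a(n) = 2^(-floor(log2 n)). It satisfies the k = 2 condition but fails k = 3, which is what the asker's first update points out; a quick check in exact arithmetic:)

```python
from fractions import Fraction

def a(n):
    # a(n) = 2^(-floor(log2 n)): 1, 1/2, 1/2, 1/4, 1/4, 1/4, 1/4, 1/8, ...
    return Fraction(1, 2 ** (n.bit_length() - 1))

# k = 2 holds:
assert all(a(n) == a(2 * n) + a(2 * n + 1) for n in range(1, 100))
# ...but k = 3 fails, e.g. a(3) = 1/2 while a(9)+a(10)+a(11) = 3/8:
assert a(3) != a(9) + a(10) + a(11)
print(a(3), a(9) + a(10) + a(11))
```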

  • 1 decade ago

    in another recurrence, it is the same as

    a(n) = a(n+1) + a(n^2 + 2n)
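This is consistent with a(n) = log10(1 + 1/n), since (1 + 1/(n+1)) * (1 + 1/(n^2+2n)) = (n+1)/n; a quick numerical check:

```python
import math

def a(n):
    return math.log10(1 + 1 / n)

# a(n) = a(n+1) + a(n^2 + 2n) holds because the arguments multiply
# back to (n+1)/n inside the log.
worst = max(abs(a(n) - (a(n + 1) + a(n * n + 2 * n))) for n in range(1, 1000))
print(worst)   # floating-point noise only
```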
