
Thread: Subset sum solver question

#1 (Forum Freshman)

Hi everyone! I'm new to the forum, so this is my first post; any hint is always welcome!

I found some conversation around this topic in the science forum, so...

I'd like to ask what would make a good benchmark for a subset sum problem solver. I have been working on an algorithm for a while and, in my opinion, it performs very well.

However, I would like to know how I could stress the solver. What would be a good test set, in your opinion?

Thank you so much in advance!

    j.



#2 (MagiMaster)

Anything without a solution will be a good test, since the only real way to show there's no solution is by trying all the possibilities. (It might be possible to rule out some without doing the work, but not enough to reduce its complexity.) So one example would be to take the first 20 or 30 powers of 2 and then check whether there's a subset that sums to the next power of 2.
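
A minimal sketch of this instance in Python, with a brute-force reference check (illustrative code, not the solver under discussion):

    # Build the unsatisfiable instance: the first k powers of 2 sum to
    # 2^k - 1, one short of the next power, so no subset reaches the target.
    def powers_of_two_instance(k):
        values = [2 ** i for i in range(k)]   # 1, 2, 4, ..., 2^(k-1)
        target = 2 ** k
        return values, target

    def has_subset_sum(values, target):
        """Brute-force reference: enumerate all reachable subset sums."""
        reachable = {0}
        for v in values:
            reachable |= {s + v for s in reachable}
        return target in reachable

    values, target = powers_of_two_instance(20)
    print(has_subset_sum(values, target))     # False: a guaranteed "not found" case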



#3 (Forum Freshman)

Quote Originally Posted by MagiMaster:
Anything without a solution will be a good test, since the only real way to show there's no solution is by trying all the possibilities. (It might be possible to rule out some without doing the work, but not enough to reduce its complexity.) So one example would be to take the first 20 or 30 powers of 2 and then check whether there's a subset that sums to the next power of 2.

Hi MagiMaster, thank you so much for your help!

The processing time for the 20-element set to reach "not found" was 156 ms.
The processing time for the 30-element set to reach "not found" was 50,383 ms (roughly 325 times slower).

As you probably already know, with the 30-element configuration you almost hit the maximum complexity level, which is when N = P (N being the number of bits (place values) required to represent the sum of all elements in the set, and P the number of bits (place values) required to represent the target value). The value distribution (powers of 2) was probably also the best one to reveal the worst-case scenario. Quite a smart stress test! Love it!!
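
A quick check of that N = P observation for the 20-element instance (assuming the powers 2^0 through 2^19):

    values = [2 ** i for i in range(20)]
    total, target = sum(values), 2 ** 20
    # The total is 2^20 - 1 (20 bits) and the target is 2^20 (21 bits),
    # so the two bit widths N and P are essentially equal.
    print(total.bit_length(), target.bit_length())   # 20 21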

The result times make sense given this kind of low-density set and the way the algorithm works.

I didn't explain this before: the solver is aimed at very large sets (hundreds to thousands of elements) and is not expected to be blazing fast on low-density sets/configurations. I guess there are better algorithms that handle these kinds of sets with dozens of elements faster than this one...

Also, the solver is more effective when you know the exact target value, since several important optimizations related to the search space take place. For this kind of "not found" test I disabled those optimizations, so the algorithm processed the full solution space.

Any other stress ideas? This one was really good!

    j.

#4 (MagiMaster)

You could try a large number of even integers and ask it for an odd sum. (In general, if all your set elements are multiples of n, any sum of those elements will also be a multiple of n.)
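
A sketch of such a guaranteed "not found" instance (illustrative Python):

    import random

    # All elements are even, so every subset sum is even and any odd
    # target is unreachable.
    values = [2 * random.randrange(1, 2 ** 31) for _ in range(1000)]
    target = sum(random.sample(values, 20)) + 1   # an even sum plus 1 is odd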

#5 (Forum Freshman)

Quote Originally Posted by MagiMaster:
You could try a large number of even integers and ask it for an odd sum. (In general, if all your set elements are multiples of n, any sum of those elements will also be a multiple of n.)

You are right; that would be a good way to keep all the optimizations in place while exercising the algorithm.

I modified a random generator of mine to produce a set of 1K values in the range 1 to 2^32, emitting only even values. The sample is very homogeneous (the first value is 2841312 and the last one is 4290108318); the generator was designed for this (not to cluster values in one specific range).
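
A stand-in for such a generator (hypothetical code, not the generator described above):

    import random

    def even_sample(n=1000, hi=2 ** 32):
        """n distinct even values spread across [2, hi)."""
        seen = set()
        while len(seen) < n:
            seen.add(2 * random.randrange(1, hi // 2))
        return sorted(seen)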

First I started looking for odd sums, and with the optimizations in place the algorithm works as fast as usual.

With the same input set:

one 'YES' test took 30 seconds (find a target value; in this case the target was 4640122460 and the subset used 37 of the 1000 available values)
another 'YES' test took 172 ms (the target was 2081947039902 and the subset used 989 of the 1000 available values)
another 'YES' test took 109 ms (the target was 1734016240730 and the subset used 903 of the 1000 available values)

The first test falls into the worst execution time for the algorithm; that is, I randomly picked a target that represents a 'hard instance' for the algorithm, while the other times represent, let's say, 'easy instances'. I keep the 30 seconds here because this way I can correlate a 'hard instance' with the "not found" test timing. (The algorithm has a parameter that, if modified, makes it output 'YES' with a 42-element subset in just 281 ms instead of the 30 seconds, using the same input set and target.)

The 'NOT FOUND' test was run with the same input set and took 78 seconds; in this case the target was 4640122461.

That was a cool test too. Maybe I should focus on optimizing the "NOT FOUND" step the same way I optimized the "FOUND" step.

I didn't realize until now how hard I worked on the 'YES' area and that I hadn't applied any important optimization to the 'NO' area...

It would be interesting to have more test ideas... this one was very good as well!

    j.

#6 (MagiMaster)

Well, once you find one subset that works you can stop, but to show that none of them work you have to go through every possibility, so the no case is always slower. You should try a no test for 100, 1,000 and 10,000 elements and see what the times are. (Although I wouldn't expect the 10,000 one to finish any time soon.)
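
A timing harness for that experiment might look like this (sketch; 'solve' stands in for the actual solver):

    import time

    def time_no_instance(solve, n):
        """Time the solver on an unsatisfiable n-element instance."""
        values = [2 * i for i in range(1, n + 1)]   # all even
        target = sum(values) - 1                     # odd, hence unreachable
        start = time.perf_counter()
        found = solve(values, target)
        return found, time.perf_counter() - start

    # for n in (100, 1000, 10000):
    #     print(n, time_no_instance(my_solver, n))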

#7 (Forum Freshman)

Quote Originally Posted by MagiMaster:
Well, once you find one subset that works you can stop, but to show that none of them work you have to go through every possibility, so the no case is always slower. You should try a no test for 100, 1,000 and 10,000 elements and see what the times are. (Although I wouldn't expect the 10,000 one to finish any time soon.)

Completely agree: to validate the worst case (NOT FOUND) you must let it run to the end and go through every possibility.

However, it depends on how you handle every possibility. If you are thinking in terms of distinct subsets, the gap is huge, 2^10000 vs. 2^1000, which means a lot! (Better not to talk about permutations.)

The unit of work of the algorithm is not a subset; it works at a much finer granularity, so there isn't a huge difference between 1,000 and 10,000 elements. It is obviously more expensive in time, since the set size impacts quadratically the time used by a function in the core process. I just ran a test on a 10,000-element set, and a full 'NOT FOUND' run took 681 seconds.

As said before, the 'FOUND' case has been heavily optimized, hence its speed, but I didn't focus on optimizing the 'NOT FOUND' case. I suppose this happened because you always want to be really fast at finding a solution, but I understand that it is the combination of both that makes the algorithm complete.

I'll look into optimizations to make the 'NOT FOUND' process as fast as the 'FOUND' case; I mean optimizing a piece of the internal process used to match a solution, not solving a 'not found' in the same number of steps as a 'found'. The 'found' case, however, has a kind of look-ahead mechanism that is able to test N^4 candidate solutions with time complexity O(1) (N being the number of elements in the set).

If you can think of some other tests, I'll be happy to hear about them!

    j.

#8 (MagiMaster)

    A test that requires a specific N/2 subset to reach the required sum should be fairly difficult too, but constructing such a set isn't as simple as the other tests.

#9 (Forum Freshman)

Quote Originally Posted by MagiMaster:
A test that requires a specific N/2 subset to reach the required sum should be fairly difficult too, but constructing such a set isn't as simple as the other tests.

Do you mean: given a set of N elements, look for a target value that is the sum of the elements of one of the subsets with N/2 elements?

That test is not so hard to build (Excel could help).

However, this case falls in the 'YES' case, and therefore on the optimized track (1K elements = 16 ms, 10K elements = 46 ms).

Anyway, the chances that the output is the original N/2-element subset, or any subset of length N/2, are really low, because the algorithm can output any existing subset that adds up to the target value, independently of the subset's length.

For instance, the 1K result was a subset using 706 of the 1K available values when the target value was built from a subset of 500 of them; the 10K result was a subset using 7076 of the 10K available values when the target value was built from a subset of 5000 of them.

The, let's say informally, "position of the target value in the problem space" is not something that hurts the algorithm but an advantage for solving the problem quickly; one of the optimizations uses this information as an input.

...or maybe I didn't understand the type of test you proposed this time, because on re-reading, the proposal could also be read as a test for the partition problem...


    j.

#10 (MagiMaster)

I mean a test case where there's only one right answer, consisting of a single N/2-length subset. The 'only one right answer' part makes it hard to solve (and also hard to construct).

One possibility would be to take all the numbers from 1 to 5,000, which total 12,502,500 (which would be the target), and then another 5,000 numbers greater than that, though it might not be too hard to throw away such numbers, since there are no negative numbers to balance them out. If you do add in some negative numbers, you'd have to make sure that they couldn't be used as part of the solution. I suppose you could take random (positive and negative) multiples of something like 20,000,000, but then you wouldn't have all unique numbers, since there are only about 200 such numbers that fit in a 32-bit integer.
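
A sketch of that construction (one reading of the suggestion, in Python):

    import random

    small = list(range(1, 5001))
    target = sum(small)                          # 12,502,500
    # 5,000 distinct decoys, each larger than the target; with all-positive
    # inputs none of them can belong to a subset that sums to the target.
    noise = random.sample(range(target + 1, 2 ** 32), 5000)
    values = small + noise
    # The unique solution is all of 1..5000, exactly an N/2-length subset
    # (though, as noted, the oversized decoys are easy to prune away).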

#11 (Forum Freshman)

Quote Originally Posted by MagiMaster:
I mean a test case where there's only one right answer, consisting of a single N/2-length subset. The 'only one right answer' part makes it hard to solve (and also hard to construct).

One possibility would be to take all the numbers from 1 to 5,000, which total 12,502,500 (which would be the target), and then another 5,000 numbers greater than that, though it might not be too hard to throw away such numbers, since there are no negative numbers to balance them out. If you do add in some negative numbers, you'd have to make sure that they couldn't be used as part of the solution. I suppose you could take random (positive and negative) multiples of something like 20,000,000, but then you wouldn't have all unique numbers, since there are only about 200 such numbers that fit in a 32-bit integer.

Let me think a bit about how to handle this; it might be possible to generate such a set. It sounds like an interesting test: a target value with surrounding noise.

Anyway, there is a place where such unique solutions happen quite naturally, namely at the "beginning" and the "end" of the combination set...

#12 (Forum Freshman)

Quote Originally Posted by MagiMaster:
I mean a test case where there's only one right answer, consisting of a single N/2-length subset. The 'only one right answer' part makes it hard to solve (and also hard to construct).

One possibility would be to take all the numbers from 1 to 5,000, which total 12,502,500 (which would be the target), and then another 5,000 numbers greater than that, though it might not be too hard to throw away such numbers, since there are no negative numbers to balance them out. If you do add in some negative numbers, you'd have to make sure that they couldn't be used as part of the solution. I suppose you could take random (positive and negative) multiples of something like 20,000,000, but then you wouldn't have all unique numbers, since there are only about 200 such numbers that fit in a 32-bit integer.

Ah, sorry for the delay.

I have been thinking about this test case, and I remembered one of the computations I did earlier while working on the algorithm's optimization. The point is that for a 10,000-element set you have approximately 2^10000 * 10000 partial subset solutions, which is much bigger than 2^46 (you need 46 bits to represent the whole space of sums for a 10,000-element set of 32-bit values; I'm assuming 32-bit values).

There is no way to construct a set where a subset of exactly N/2 elements provides a solution that no other subset is able to provide.

I believe this one cannot be constructed.

I strongly believe the previous sample (the one that took 30 seconds to find a 37-value subset in the 1,000-element set) shows the effect you are looking for: the worst case.

As I stored the subset (it was interesting enough because of this result), I can execute it again and show you some figures:

Worst case execution
===============
The number of subsets 'forwarded' was 11,737,246
The number of subsets 'deprecated' was 11,250,550,543

and it required almost 30 seconds to finish.

If a spatial parameter in the algorithm is changed:

The number of subsets 'forwarded' was 71,293
The number of subsets 'deprecated' was 48,850,972

And this time it took 515 ms.

So the first case seems to be the worst case, which I believe is what you were chasing.

Let's continue with the tests,


    j.

#13 (MagiMaster)

I'm not quite sure what you're saying the 2^46 represents. (I know where you got it: 10000 * 2^32; I'm just not sure why that number is important.) Naively, there would be 2^320000 possible configurations of the 320,000 bits in 10,000 32-bit numbers, but that doesn't take into consideration the lack of order or any other constraints that may be less apparent. (10,000 different numbers from that range would be (2^32 choose 10000) sets, which is a bit smaller at about 2^200000 [https://www.wolframalpha.com/input/?...oose+10000%29].)

Also, there should be 2^10000 - 1 subsets if you exclude the empty set, not 10000 * 2^10000.

#14 (Forum Freshman)

Quote Originally Posted by MagiMaster:
I'm not quite sure what you're saying the 2^46 represents. (I know where you got it: 10000 * 2^32; I'm just not sure why that number is important.) Naively, there would be 2^320000 possible configurations of the 320,000 bits in 10,000 32-bit numbers, but that doesn't take into consideration the lack of order or any other constraints that may be less apparent. (10,000 different numbers from that range would be (2^32 choose 10000) sets, which is a bit smaller at about 2^200000 [https://www.wolframalpha.com/input/?...oose+10000%29].)

Also, there should be 2^10000 - 1 subsets if you exclude the empty set, not 10000 * 2^10000.

You are right; I just wrote down some of the internal properties the algorithm deals with to find the solution while writing the reply.

That information is used to represent internally the 'solution space density': 2^46 is the maximum possible number of "solution slots" for a 10,000-element set (assuming all values take 32 bits to represent and all have the MSB set; if you know the values upfront you can be quite accurate).

You are right about the 2^10000 - 1 (the additional multiplication is something internal to the algorithm).

What I was trying to say is that you have 2^10000 subsets, and each one sums to a value that can only lie between 1 and 2^46 - 1, so we have 2^10000 solutions to distribute into 2^46 "holes". So, in my opinion, it is practically impossible to have a unique N/2 subset that sums to a specific value...
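
The counting argument in a couple of lines (assuming 32-bit values):

    max_total = 10_000 * (2 ** 32 - 1)   # largest possible total sum
    print(max_total.bit_length())        # 46: every subset sum fits in 46 bits
    # 2^10000 subsets land in fewer than 2^46 sum "holes", so by pigeonhole
    # enormous numbers of subsets must collide on every reachable sum.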

Does this make sense to you?

    j.

#15 (MagiMaster)

Oh right. Yeah, that makes sense. On the other end of that, though, it would imply that the largest number of numbers you could use and still guarantee a unique solution is only 37, which doesn't seem quite right. (2^37 < 37 * 2^32, but 2^38 > 38 * 2^32.)

Edit: I see. 37 is the most numbers you can have so that every subset sum could be unique, but it's fine to have duplicate sums as long as at least one is still unique.
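
A quick verification of that bound (sketch):

    # Largest n with 2^n <= n * 2^32, i.e. the subsets do not yet
    # outnumber the roughly n * 2^32 available sum slots.
    n = 1
    while 2 ** (n + 1) <= (n + 1) * 2 ** 32:
        n += 1
    print(n)   # 37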

#16 (Forum Freshman)



Going back to the 'NOT FOUND' case, I'm still thinking about how to optimize it, at least to make it as fast as the 'FOUND' case.

In fact, I believe there is a way.

It will probably take me some time to figure out how to compute this. The idea is simple: once you have exhausted the search space (the search space being the only possible places where the value could be), stop.

I can narrow the search space right now (the spatial parameter), but I'm not 100% sure whether the full search space has been exhausted once everything is processed, or whether some potential space has not been processed at all. So the only way to be sure is, as you pointed out before, to process the full problem, and this is the other reason why it takes so long.

While I work on that, I'd like to challenge the 'YES' case a bit more, so any additional idea is welcome!

    j.

#17 (MagiMaster)

I'm out of ideas for test cases. The only other thing to do is to throw lots of random numbers at the problem until you find something slower than expected. (Just make sure you keep a copy so you can look into it more later.)

#18 (Forum Freshman)

Quote Originally Posted by MagiMaster:
I'm out of ideas for test cases. The only other thing to do is to throw lots of random numbers at the problem until you find something slower than expected. (Just make sure you keep a copy so you can look into it more later.)

MM, you have been really helpful. Thank you so much for throwing those tests at the algorithm; they were very useful for setting the next steps.

I'll come back when I have worked a bit on the 'NOT FOUND' scenario. Hopefully I'll have some time in the next two weeks to think about the optimizations I can make in this area.

Thank you!

    j.

#19 (MagiMaster)

No problem. One final thing, though. While actual speeds are interesting, they're also not terribly reliable, as they depend on both the computer used and the specific data set being tested. If you've got some really good optimizations, it would be more interesting if you could prove their algorithmic properties. Just checking the Wikipedia article, the best listed algorithm takes O(2^(N/2)) steps in the general case, so if you can cut that down any, it's probably publishable. (Although if your algorithm is restricted to 32-bit integers, it might be more comparable to the dynamic programming solution mentioned, which would be O(N * 2^32).)
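
For reference, a compact sketch of the O(2^(N/2)) meet-in-the-middle idea mentioned above (illustrative Python, not the algorithm under discussion):

    def subset_sum_mitm(values, target):
        """True if some subset sums to target (the empty set counts for 0)."""
        half = len(values) // 2

        def all_sums(part):
            sums = {0}
            for v in part:                   # grows to at most 2^len(part) sums
                sums |= {s + v for s in sums}
            return sums

        left = all_sums(values[:half])
        right = all_sums(values[half:])
        # A solution exists iff some left-half sum pairs with target minus it.
        return any(target - s in right for s in left)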

#20 (Forum Freshman)

Quote Originally Posted by MagiMaster:
No problem. One final thing, though. While actual speeds are interesting, they're also not terribly reliable, as they depend on both the computer used and the specific data set being tested. If you've got some really good optimizations, it would be more interesting if you could prove their algorithmic properties. Just checking the Wikipedia article, the best listed algorithm takes O(2^(N/2)) steps in the general case, so if you can cut that down any, it's probably publishable. (Although if your algorithm is restricted to 32-bit integers, it might be more comparable to the dynamic programming solution mentioned, which would be O(N * 2^32).)

Sorry for the delayed reply! (Just travelling!)

What you explain is right: it is not the raw speed that matters; what is relevant is the time complexity.

The reason I wanted new tests to challenge the algorithm is to complement the tests I already have. This is quite useful for analyzing the algorithm's time complexity and making sure I do not miss anything important.

Big O notation is about the worst-case execution time, so before stating that the algorithm is O(something) you need to check it (and triple-check it if your algorithm performs better than the state of the art).

For low-density sets, I know the time complexity for 'FOUND' scenarios is on the order of O(N^2) to O(N^2 * N/2). The 'NOT FOUND' case is different, and I do not yet have a clear view of the algorithm's potential time complexity in this area; additional work is required, as here I know there is much more to discover than what I have discovered so far. I know what I should examine to verify the best time complexity in the 'NOT FOUND' scenario, but developing the required implementation and mathematical foundation is no piece of cake; it will take time...

When N and P are quite different (N=1,000/P=32; N=10,000/P=72; N=200/P=40), that is, when the density is beyond a known threshold, both 'FOUND' and 'NOT FOUND' scenarios are solvable with time complexity O(N^2). If I finally verify there are no exceptions, it will be worth publishing, as there are no constants or constraints in the algorithm (element and variable bit widths are not constrained to anything) and the time complexity would be much better than any known algorithm's.

I do not feel I am in a hurry to publish, as in my opinion there are several open questions I'd like to answer before writing a paper. An alternative is to publish everything 'as-is' and let others (smarter than me) answer those questions if they believe it's worthy.

The algorithm's foundation is extremely simple (it is easy to describe how it works) yet really hard to explain in full (the implementation together with its mathematical foundation and the required optimizations). That takes time, as you must go step by step and decompose everything to the right level of detail so you do not miss anything.

The best thing is that the algorithm is a real implementation, not just a theory: you can run all the tests you want to measure the time complexity in different scenarios, and then see which areas could require improvements (optimizations), as we did in the 'NOT FOUND' case.

The algorithm's accuracy has already been settled and is not in doubt right now.

I really appreciate your help with the test ideas; if you have new ones, do not hesitate to propose them!



    j.

#21 (MagiMaster)

Quote:
An alternative is to publish everything 'as-is' and let others (smarter than me) answer those questions if they believe it's worthy.

Ah, well, that probably wouldn't get published like that. But yeah, keep working on it. I can't think of any more test cases to add.

#22 (Forum Freshman)

Quote Originally Posted by MagiMaster:
Ah, well, that probably wouldn't get published like that. But yeah, keep working on it. I can't think of any more test cases to add.

That's true!

As I said, thank you very much for the ideas you provided; some of them gave me hints on how to continue working!

I will keep you posted.

    j.