Let's just do a complexity analysis.
The most efficient algorithm in terms of both memory and speed is the fourth one. Essentially, you have to look at the running time and memory consumption of each algorithm.
In the first algorithm:
The string is iterated in linear time, looking for each of the characters given in the split array (passed as a parameter to the method), and for each one the string is traversed up to N, where N is the length of the string. The method then has to walk that result and create M temporary substrings, one per character in split, before building a list of values whose indexed access is constant time.
As a result you get O((N * M) + 1), where N is the length of the string and M is the number of substrings generated in each split operation.
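The original code for this algorithm is not shown in the question, so the following is only a hypothetical reconstruction of a Split-based approach (the string contents and variable names are assumptions):

```csharp
// Hypothetical sketch of the first algorithm (the original is not shown).
string cadena = "funcion(argumento)";

// Split walks the N characters of the string and allocates M temporary
// substrings, which is where the O((N * M) + 1) estimate comes from.
string[] partes = cadena.Split(new[] { '(', ')' },
                               StringSplitOptions.RemoveEmptyEntries);

string resultado = partes[1]; // indexed access into the array is O(1)
```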
The second algorithm:
Basically it is the same procedure as the first algorithm, except that here it consumes more memory, because you have to create an array of characters, create a temporary variable, and iterate over the string, which in this case has been converted into a character array.
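Again, the original code is not shown, so this is only a plausible sketch of a manual character-array scan (names and string contents are assumptions):

```csharp
// Hypothetical sketch of the second algorithm: copy the string into a
// char[] and accumulate the substring by hand.
string cadena = "funcion(argumento)";
char[] caracteres = cadena.ToCharArray();        // extra O(N) memory for the copy
var temporal = new System.Text.StringBuilder();  // temporary accumulator
bool dentro = false;

foreach (char c in caracteres)
{
    if (c == '(') { dentro = true; continue; } // start capturing after '('
    if (c == ')') break;                       // stop at the first ')'
    if (dentro) temporal.Append(c);
}

string resultado = temporal.ToString();
```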
The third algorithm:
It's a double-edged sword. The complexity lies in the length and intricacy of the rule itself. Regex should only be used when the rule is somewhat complex: validating emails, addresses, number formats, mentions and hashtags, and so on. For example, if you did not use a Regex to find mentions or hashtags in a string, you would have to write a giant algorithm and an interval tree to obtain the indexes where each mention or hashtag is located, and with massive strings that means spending a lot of memory extracting every substring that is a mention or a hashtag. Regular expressions should be used as validators for complex strings, since they save you from writing a gigantic algorithm. Obviously, in this simple case, Regex is the option with the most complexity and memory consumption.
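For reference, a Regex version of this same extraction could look like the following (the pattern and string contents are my own illustration, not code from the question):

```csharp
// Third approach: Regex. Concise, but the engine's cost depends on the
// pattern, which is overkill for a rule this simple.
using System.Text.RegularExpressions;

string cadena = "funcion(argumento)";

// \( ... \) matches the parentheses; group 1 captures everything
// that is not a closing parenthesis between them.
Match m = Regex.Match(cadena, @"\(([^)]*)\)");
string resultado = m.Success ? m.Groups[1].Value : string.Empty;
```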
For the fourth algorithm:
int posInicial = cadena.LastIndexOf("(") + 1;       // index right after the last '('
int longitud = cadena.IndexOf(")") - posInicial;    // length up to the first ')'
resultado = cadena.Substring(posInicial, longitud); // O(N) copy of that slice
It iterates over the length N of the string twice (once for LastIndexOf and once for IndexOf) and then copies the result in N, so the complexity would be O((2 * N) + N), which is still linear: O(N).
So, ranking them seriously from best to worst:
O((2 * N) + N) for the fourth algorithm.
O((N * M) + 1) for the first algorithm.
O((N * M) + 1) for the second algorithm, which has the same asymptotic cost but consumes more memory than the first.
O(?) for the third algorithm. Regex is the most complicated and the one that consumes the most memory; you can tell in advance that it has the greatest complexity because of the machinery involved.
Keep in mind that in your example these times are insignificant (none of them reaches 1 ms of processing), so if you want to see the differences more clearly, you would have to try it with a very long string. This answer is based on my experience with algorithms; if someone is willing to document and contradict me, or finds an error, I am open to discussing it.
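One way to make the differences visible is to time the approaches against a very long string with Stopwatch. The padding size below is arbitrary; the point is only that the string is large enough for the timing to register:

```csharp
// Timing the fourth algorithm on a deliberately huge string.
using System;
using System.Diagnostics;
using System.Text;

// Build a string with millions of filler characters before the parentheses.
string cadena = new StringBuilder()
    .Append('x', 5_000_000)
    .Append("(argumento)")
    .ToString();

var sw = Stopwatch.StartNew();
int posInicial = cadena.LastIndexOf("(") + 1;
int longitud = cadena.IndexOf(")") - posInicial;
string resultado = cadena.Substring(posInicial, longitud);
sw.Stop();

Console.WriteLine($"Fourth algorithm: {sw.Elapsed.TotalMilliseconds} ms");
```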
For background on the analysis of algorithms, you can read Understanding Big O Notation, or This link, which is more complete.