Thursday, November 4, 2021

How do you apply algorithmic design and data structure techniques in developing structured programs?

The best way to begin deciding which algorithms and data structures to use in your structured programs is to understand that there is no one-size-fits-all solution. By understanding the common algorithms and data structures and their strengths and weaknesses, you are well on your way to becoming a better programmer. In the following paragraphs, we will dive into some key definitions and concepts you can apply in your structured programs.

Data Structures

When developing a program, the programmer should start by studying data structures and then the algorithms that manipulate them. The way information is structured in a program has a significant effect on its processing efficiency. That said, what is a data structure? A data structure is a representation of how data is organized so that it can be used efficiently (Lysecky, Vahid, Lysecky, & Givargis, 2015). Data structures can help solve various problems, including searching data, coping with limited processor speeds, and handling multiple requests (Shaffer, 2013). For example, a sorted list of integers stored in an array is one form of structuring. Others include linked lists, hash tables, stacks, queues, graphs, and trees.
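To make the idea concrete, here is a minimal sketch in Python (the post does not prescribe a language) showing the same integers organized three ways, each structure supporting different operations efficiently:

```python
import bisect
from collections import deque

# The same five integers, organized three different ways.
data = [42, 7, 19, 3, 28]

sorted_array = sorted(data)     # sorted array: supports binary search
hash_table = set(data)          # hash table (set): fast membership tests
queue = deque(data)             # queue: fast insertion/removal at the ends

print(bisect.bisect_left(sorted_array, 19))  # index of 19 via binary search
print(19 in hash_table)                      # average O(1) lookup
print(queue.popleft())                       # O(1) removal from the front
```

The data are identical in all three cases; only the organization, and therefore the set of cheap operations, differs.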

According to Shaffer, you should follow these steps when selecting a data structure to solve a problem (2013, p. 24):

1. Analyze your problem to determine the basic operations that must be supported. Examples of basic operations include inserting a data item into the data structure, deleting a data item from the data structure, and finding a specified data item (i.e., the data and the operations to be performed on them).

2. Quantify the resource constraints for each operation (i.e., the representation for those data)

3. Select the data structure that best meets these requirements (i.e., the implementation of that representation)

The above steps are a data-centric approach to choosing the best data structure for your problem set. As Shaffer (2013) notes, resource constraints on key operations (i.e., search, insertion, and deletion) usually drive the data structure selection process. Because each data structure has associated costs and benefits, a programmer must be diligent in the selection process and remember that no single data structure works in all situations. For example, a linked list is good at adding and deleting items but poor at retrieving and searching them. As a programmer, you should carefully analyze your problem’s characteristics to determine the best data structure for the task.
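The linked-list tradeoff above can be seen in a short sketch (a minimal singly linked list written for illustration, not a library implementation): prepending takes constant time, while searching must walk the chain node by node.

```python
class Node:
    def __init__(self, value, next=None):
        self.value = value
        self.next = next

class LinkedList:
    def __init__(self):
        self.head = None

    def prepend(self, value):
        # O(1): link a new node in front; no traversal needed.
        self.head = Node(value, self.head)

    def contains(self, target):
        # O(n): must visit nodes one at a time until a match is found.
        node = self.head
        while node is not None:
            if node.value == target:
                return True
            node = node.next
        return False

lst = LinkedList()
for v in (1, 2, 3):
    lst.prepend(v)       # each insertion is constant time

print(lst.contains(2))   # found only after traversing from the head
```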

Algorithms

Next, we need to understand algorithms. We can think of an algorithm as a step-by-step procedure: a set of instructions to be executed in a certain order to produce a desired output (Lysecky, Vahid, Lysecky, & Givargis, 2015). In other words, it states how the data will be manipulated. Now, we certainly want our program to be efficient, and we typically measure that by its complexity in the worst case, meaning the input of a given size that forces the algorithm to do the most work.

One technique to employ is asymptotic analysis, which estimates the resource consumption of an algorithm. This allows us to compare the costs of multiple algorithms for solving the same problem (Shaffer, 2013). It measures the efficiency of an algorithm as the input size becomes large (its complexity). Complexity is divided into the following two categories:
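As a rough sketch of what asymptotic comparison looks like in practice, the worst-case step counts of linear search and binary search can be tabulated side by side (the formulas below are the standard worst-case counts, shown here purely for illustration):

```python
import math

def linear_search_steps(n):
    # Worst case: examine all n elements before concluding, O(n).
    return n

def binary_search_steps(n):
    # Worst case: halve the search range each step, O(log n).
    return max(1, math.ceil(math.log2(n + 1)))

for n in (10, 1000, 1_000_000):
    print(n, linear_search_steps(n), binary_search_steps(n))
```

As n grows, the gap widens dramatically: at a million elements, binary search needs about 20 steps where linear search may need a million. That widening gap is exactly what asymptotic analysis captures.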

·        Time: a function describing the amount of time an algorithm takes in terms of the amount of input to the algorithm (Zeigler, 2004). We avoid measuring actual wall-clock time because many factors unrelated to the algorithm can affect it, including the programming language used, the computing hardware, programmer proficiency, compiler optimization, etc.

·        Space: a function describing the amount of memory (space) an algorithm takes in terms of the amount of input to the algorithm (Zeigler, 2004). This is usually calculated from the extra memory an algorithm needs, not the memory needed to store the input itself. While space complexity is sometimes ignored, there are times when it should be considered.
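The space bullet above can be illustrated with list reversal (a toy example chosen for clarity): one version allocates a whole new list, the other reverses within the input using only a couple of index variables.

```python
def reverse_copy(xs):
    # O(n) extra space: builds and returns a brand-new list.
    return xs[::-1]

def reverse_in_place(xs):
    # O(1) extra space: swaps elements within the existing list.
    i, j = 0, len(xs) - 1
    while i < j:
        xs[i], xs[j] = xs[j], xs[i]
        i += 1
        j -= 1
    return xs

print(reverse_copy([1, 2, 3]))      # input list is untouched
print(reverse_in_place([1, 2, 3]))  # input list itself is reversed
```

Both run in O(n) time; they differ only in the extra memory consumed, which is precisely what space complexity measures.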

Typically, you will analyze the time required for an algorithm and the space required for a data structure. There is often a time-space tradeoff, meaning you must trade computing time against memory consumption, depending on the algorithm chosen (Complexity analysis, n.d.). We must focus on the asymptotic complexity of the algorithm: as n (the number of items of input) goes to infinity, what happens to the algorithm’s performance?
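A classic illustration of the time-space tradeoff is memoization, sketched here with Fibonacci numbers: caching previously computed results spends memory to avoid recomputing the same subproblems.

```python
from functools import lru_cache

def fib_slow(n):
    # Exponential time, constant extra space: recomputes subproblems.
    return n if n < 2 else fib_slow(n - 1) + fib_slow(n - 2)

@lru_cache(maxsize=None)
def fib_fast(n):
    # Linear time, linear space: each result is computed once and cached.
    return n if n < 2 else fib_fast(n - 1) + fib_fast(n - 2)

print(fib_fast(30))  # 832040, computed with only ~30 cached entries
```

Both functions return identical values; the cached version trades O(n) memory for an exponential reduction in running time.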

Big O notation is used to mathematically describe how the running time of an algorithm grows relative to the input size. In other words, we use it to describe the performance of an algorithm and determine its scalability. Scalability is also referred to as growth rate, and an algorithm scales best when it has a constant time complexity. However, not all problems can be solved in constant time, so logarithmic, linear, quadratic, and exponential growth rates are other considerations for programmers.
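The growth rates above can be made tangible by counting loop iterations directly (a deliberately simple sketch; real algorithms do useful work in each step):

```python
def count_constant(n):
    # O(1): one step regardless of input size.
    return 1

def count_linear(n):
    # O(n): one pass over the input.
    steps = 0
    for _ in range(n):
        steps += 1
    return steps

def count_quadratic(n):
    # O(n^2): a full pass over the input for every element of the input.
    steps = 0
    for _ in range(n):
        for _ in range(n):
            steps += 1
    return steps

for n in (10, 100):
    print(n, count_constant(n), count_linear(n), count_quadratic(n))
```

Multiplying the input size by 10 leaves the constant count unchanged, multiplies the linear count by 10, and multiplies the quadratic count by 100, which is why growth rate dominates scalability.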

While that was a lot to digest, it is important to understand the basics of algorithmic design and data structures. With the knowledge set forth above, you should be well on your way to understanding what to consider when choosing a data structure or algorithm. Through understanding the basics, you can achieve programming efficiency!

References

Complexity analysis. (n.d.). Retrieved from http://www.cs.utexas.edu/users/djimenez/utsa/cs1723/lecture2.html

Lysecky, R., Vahid, F., Lysecky, S., & Givargis, T. (2015). Data structures essentials. Retrieved from https://zybooks.zyante.com/#/zybook/DataStructuresEssentialsR25

Shaffer, C. A. (2013). Data structures and algorithm analysis. (Edition 3.2). Retrieved from http://people.cs.vt.edu/~shaffer/Book/JAVA3elatest.pdf

Zeigler, J. (2004). Time complexity, space complexity, and the O-notation. Retrieved from http://www.leda-tutorial.org/en/official/ch02s02s03.html
