Clean Coding: Step Two

Burak Acar
11 min read · Jan 23, 2022

In the second part of my clean code journey, I looked into several ideas that can help us write clean OOP code. The foundation of all of them is to build clean, focused, and intelligible structures, which lets the software industry evolve and reach a common standard. Let's have a look at them together.

So, what are the fundamentals this essay will cover? Don't Repeat Yourself, the Single Responsibility Principle, the Open/Closed Principle, the Liskov Substitution Principle, Design by Contract, the Interface Segregation Principle, and the Dependency Inversion Principle. Do you have the time to improve your software development skills?

DRY (Don't Repeat Yourself)

The DRY principle's foundation is nearly identical to that of the other principles. The goal is to have a single piece of code that serves every place the same operation is needed. Code blocks that repeat themselves are particularly costly in terms of readability and of the time spent maintaining the product. The principle instructs us to collect all of our common code in one location and then use it from there.

When the DRY principle is followed correctly, changing one element of a system does not necessitate changing other, logically unrelated pieces.
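
As a small, hedged sketch of what this looks like in Java (the classes and the e-mail rule here are invented for illustration), a validation rule that used to be copy-pasted is kept in one authoritative place and called from everywhere it is needed:

// Illustrative sketch; class names are hypothetical.
final class EmailRules {
    private EmailRules() {}

    // The single, authoritative representation of the rule.
    static boolean isValid(String email) {
        return email != null && email.contains("@");
    }
}

class RegistrationService {
    void register(String email) {
        if (!EmailRules.isValid(email)) {
            throw new IllegalArgumentException("Invalid e-mail: " + email);
        }
        // ... create the account
    }
}

class NewsletterService {
    void subscribe(String email) {
        if (!EmailRules.isValid(email)) {
            throw new IllegalArgumentException("Invalid e-mail: " + email);
        }
        // ... add to the mailing list
    }
}

If the rule ever changes, only EmailRules changes; the two logically unrelated services stay untouched.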

“There are only two hard things in Computer Science: cache invalidation and naming things.” — Phil Karlton

Furthermore, logically connected items change predictably and uniformly, and are therefore kept in sync. To apply the DRY concept across layers, Thomas and Hunt use code generators, automated build systems, and scripting languages in addition to methods and subroutines.

“Every piece of knowledge must have a single, unambiguous, authoritative representation within a system.” — The Pragmatic Programmer

SRP (Single Responsibility Principle)

The Single Responsibility Principle corresponds to the letter “S” in the SOLID acronym. You should build a class so focused that there is never more than one reason for it to change. A class should have only one task and should be completely focused on it. It should have the expertise and responsibility needed for that task, but no more effort or knowledge should be required of it. It should be modified only when that task absolutely requires it, and it should not be affected by changes in other work.

It's similar to the “Separation of Concerns” principle, which states that you are more effective if you concentrate on only one aspect of the task at a time.

Each class should contain only the work that relates to it; beyond its own responsibility, it shouldn't hold anything.

Even though the principle is usually defined in terms of classes, a class with too many responsibilities has many reasons to change. For this reason, beyond the class as a whole, the functions inside it should be organized with the same idea in mind, so that each does only the work it is responsible for. That way, if a change is needed within the class, only the functions with the related responsibility change, and the problem is solved without touching the entire class. In fact, we can push the idea down to the function level and arrive at the point where a single line does only one job. Such a single operation should be a task that poses no comprehension problem.

Patterns that perform multiple operations in a single line are not treated as SRP violations, because they are common and easy to understand. If, as you type, you format such code with each operation on its own line, that is a bonus for readability. The pattern performs many actions on the same object and returns the result without affecting the primary object of the operation.
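
As a rough illustration of that kind of single-line pattern (the data and names below are made up), a stream chain applies several operations to the same source and returns a new result without touching the original list:

// Illustrative sketch; the data is invented.
import java.util.List;
import java.util.stream.Collectors;

class ChainExample {
    public static void main(String[] args) {
        List<String> names = List.of("ada", "linus", "grace");

        // Several operations in one expression; 'names' itself is never modified.
        List<String> formatted = names.stream()
                .filter(n -> n.length() > 3)
                .map(String::toUpperCase)
                .sorted()
                .collect(Collectors.toList());

        System.out.println(formatted); // prints [GRACE, LINUS]
    }
}

Formatting each chained call on its own line, as above, is exactly the readability bonus mentioned.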

“I can list many features I know for clean code; but one covers all other features. The clean code always seems to have been written by someone who values it.”

— Michael Feathers

The most essential benefit of the Single Responsibility Principle is that it minimizes the impact that parts of the software have on one another.

Consider Java: a class you build may perform two different tasks. Those two activities have a transitive connection in which they influence one another. If you modify one of them, you must also check whether the behavior of the other has changed. This adds to your test time and hurts readability.
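
A hedged sketch of how splitting such a class might look (all names here are invented): building a report and storing it are two separate reasons to change, so they live in two classes:

// Illustrative sketch; class names are hypothetical.
import java.util.List;

// Responsibility 1: calculating the report.
class ReportBuilder {
    String build(List<Double> values) {
        double sum = values.stream().mapToDouble(Double::doubleValue).sum();
        return "Total: " + sum;
    }
}

// Responsibility 2: persisting the report.
class ReportWriter {
    void write(String report) {
        System.out.println(report); // stand-in for writing to disk or a database
    }
}

class ReportingFlow {
    public static void main(String[] args) {
        String report = new ReportBuilder().build(List.of(1.5, 2.5));
        new ReportWriter().write(report); // prints "Total: 4.0"
    }
}

A change in how reports are stored no longer forces you to retest how they are calculated.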

Open/Closed Principle

Software entities should be open for extension but closed for modification. Being open for extension means the software keeps growing by adding new features without modifying the existing ones.
The fundamental logic of being closed to change is that no changes are made to the existing source code: classes, interfaces, and methods that are already in use must be preserved.

When software is first designed, the idea is to completely separate the portions that may be extended or replaced in the future from the sections that are known not to change. When a new feature is added, it improves the product without causing difficulties, since the portions that must continue to function remain unchanged. This makes it easier to deal with dependencies in new development and to determine which sections the introduced code will affect.

The Open/Closed Principle’s most essential benefit is that it increases software reuse and makes maintenance easier. This lowers the cost of the product while also reducing the time it takes to add a new feature.

Let's start with a straightforward example. If you pass an object instead of raw data as a parameter to a function, you'll be able to reuse the original function for any subclass later derived from that object's type. With this approach we make no modifications to the function, yet we gain extensibility.
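
A minimal sketch of that idea, with invented names: the broadcast method accepts the abstract supertype, so new notification channels extend the behavior without the method itself ever being modified:

// Illustrative sketch; class names are hypothetical.
import java.util.List;

abstract class Notification {
    abstract void send(String message);
}

class EmailNotification extends Notification {
    void send(String message) { System.out.println("e-mail: " + message); }
}

class SmsNotification extends Notification {
    void send(String message) { System.out.println("sms: " + message); }
}

class Notifier {
    // Closed for modification: a new Notification subclass works here without any edits.
    void broadcast(List<Notification> channels, String message) {
        for (Notification channel : channels) {
            channel.send(message);
        }
    }
}

class OcpDemo {
    public static void main(String[] args) {
        new Notifier().broadcast(
                List.of(new EmailNotification(), new SmsNotification()),
                "build finished");
    }
}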

The Open/Closed Principle's notion of never modifying existing code is hard to apply literally in practice. It is impossible to update code without altering it at all, and even when no changes are planned, every codebase needs refactoring. So even if OCP is impossible to implement perfectly, it is worth striving to get as close to it as feasible throughout development. It is the responsibility of both the product architect and the software developer to apply this idea.

Liskov Substitution Principle (LSP)

The Liskov Substitution Principle applies to most OOP languages and essentially ensures that inheritance is used correctly. What this means is that when you create a subclass, it should be able to perform all of the tasks of the superclass correctly, so that when it replaces the superclass it doesn't produce a mess. It may, however, add its own behavior on top of that.

“Clean code can be read and improved by developers other than its original author. It has unit and acceptance tests. It has meaningful names. It provides one way rather than many ways of doing one thing. It has minimal dependencies and provides a clear and minimal API.” — Dave Thomas

A square and a rectangle are commonly used to illustrate the subject; a sketched version of the code for this example is shown below. A rectangle is the superclass here, and a square is a specialized subclass. A square is a rectangle in real life, and it may be accepted as a subclass, but is it appropriate to model it that way during software development?
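
The sketch below is an illustrative reconstruction (class and method names are my own), showing why the substitution breaks down:

// Illustrative sketch; names are hypothetical.
class Rectangle {
    protected int width, height;

    void setWidth(int width)   { this.width = width; }
    void setHeight(int height) { this.height = height; }
    int area()                 { return width * height; }
}

class Square extends Rectangle {
    // To stay a square, both sides must change together.
    @Override void setWidth(int width)   { this.width = width;  this.height = width; }
    @Override void setHeight(int height) { this.width = height; this.height = height; }
}

class LspDemo {
    public static void main(String[] args) {
        Rectangle r = new Square();   // the substitution LSP is about
        r.setWidth(5);
        r.setHeight(4);
        // A caller expecting Rectangle behavior assumes area == 20,
        // but the Square silently yields 16.
        System.out.println(r.area()); // prints 16
    }
}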

Is a method for changing the height or width of a square, for example, the same as one for changing the height and width of a rectangle? Mathematically we can say that a square is derived from a rectangle. In terms of behavior, however, Square does not substitute for Rectangle, so this hierarchy contradicts the Liskov principle (LSP).

Basically, at the point we've arrived at, square and rectangle can have quite distinct behaviors, so when LSP is taken into account it's more practical to model them as two separate classes.

Design by contract (DbC)

The idea states that software components should have formal, precise, and verifiable interfaces, and it achieves this by laying out a strategy of pre- and post-conditions on those interfaces. Liskov's behavioral definition of subtyping appears here in the form of pre- and post-conditions. The interface is the point at which two objects communicate with one another, and the pre- and post-conditions govern how that communication happens. The construct can be viewed as a way to agree on how the objects involved should behave.

Subtypes may weaken preconditions and tighten postconditions. In other words, a subtype can accept values that its supertype cannot and produce stronger, more specialized outcomes. Inheritance is fundamentally a generalization and specialization process: subclasses are the more specialized types, superclasses the more generic ones. The guideline is that the code should not get confused when the hierarchy is used in the more general direction; even when a subtype is not used as specifically as it could be, it should continue to work.

“There are two ways of constructing a software design: One way is to make it so simple that there are obviously no deficiencies, and the other way is to make it so complicated that there are no obvious deficiencies. The first method is far more difficult.” — C. A. R. Hoare

It states that software designers should provide formal, exact, and verifiable interface specifications for software components, which include preconditions, postconditions, and invariants in addition to the standard description of abstract data types. In a conceptual parallel with the requirements and duties of commercial contracts, these specifications are referred to as “contracts.”

The preconditions in question here are the conditions the client must satisfy in order to call the method. The postconditions relate to the state of the object, or the returned values, when the method finishes its execution.
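
A small, hypothetical illustration of such a contract in Java (the BankAccount class is invented for this sketch): the precondition guards the call, and the postcondition states what must hold when the method returns:

// Illustrative sketch; class and method names are hypothetical.
class BankAccount {
    private long balanceInCents;

    BankAccount(long openingBalanceInCents) {
        this.balanceInCents = openingBalanceInCents;
    }

    void withdraw(long amountInCents) {
        // Precondition: what the client must guarantee before calling.
        if (amountInCents <= 0 || amountInCents > balanceInCents) {
            throw new IllegalArgumentException("amount must be positive and covered by the balance");
        }

        long balanceBefore = balanceInCents;
        balanceInCents -= amountInCents;

        // Postcondition: what the method guarantees when it returns.
        assert balanceInCents == balanceBefore - amountInCents && balanceInCents >= 0
                : "withdraw broke its contract";
    }

    long balance() { return balanceInCents; }
}

Java evaluates assert statements only when assertions are enabled (the -ea JVM flag), so in production code postconditions are often expressed as explicit checks or unit tests instead.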

The DbC method assumes that all client components calling an operation of a server component will fulfill the preconditions stated for that operation.
When this assumption is considered too risky, the server component checks whether all relevant preconditions are met (before or while processing the client component's request) and responds with an appropriate error message if they are not.

Interface Segregation Principle (ISP)

Clients should not be forced to depend on interfaces they do not use. ISP is the version of SRP applied to interfaces, aiming for high cohesion. If an interface has sets of methods that serve distinct clients, it should be split into separate interfaces dedicated to each client.

Methods that an interface forces on a client but that the client does not use should be moved out.
From a readability standpoint, it is preferable to have many small, clear, business-oriented interfaces than one interface that collects all the unrelated functions.

Each responsibility should have its own interface. As a result, the program that makes use of an interface sees only the methods it requires. If we have just one interface for several purposes, we are exposing more methods or features than are needed, which means we are breaking the ISP.

“Clean code is simple and direct. Clean code reads like well-written prose. Clean code never hides the designer's intent; rather, it is full of crisp abstractions and straightforward lines of control.” — Grady Booch

Objects should never be forced to implement interface members (properties, methods, etc.) that they do not use.

Rather than providing one interface that includes all coffee machines and their features, following the ISP we should sort them by kind and use only the ones that are required. When I want a filter coffee machine, I need an interface that knows only about filter coffee operations and nothing else.
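
Sketching that with invented interface names, the capabilities are split so that a filter machine never has to know about espresso:

// Illustrative sketch; interface and class names are hypothetical.
interface FilterCoffeeMachine {
    void brewFilterCoffee();
}

interface EspressoMachine {
    void pullEspressoShot();
}

// Implements only the interface it actually needs.
class BasicFilterMachine implements FilterCoffeeMachine {
    public void brewFilterCoffee() {
        System.out.println("Brewing filter coffee...");
    }
}

// A combi machine opts into both capabilities explicitly.
class CombiMachine implements FilterCoffeeMachine, EspressoMachine {
    public void brewFilterCoffee() { System.out.println("Brewing filter coffee..."); }
    public void pullEspressoShot() { System.out.println("Pulling an espresso shot..."); }
}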

Dependency Inversion Principle (DIP)

The fifth S.O.L.I.D concept is the Dependency Inversion Principle. It explains what has to be done to reduce the tight coupling between a system's modules.
This concept states that when working on OOP, we must adhere to the following two rules:
High-level classes should not rely on low-level classes; instead, abstraction or interface should be used to establish a link between them.
Abstractions should not be dependent on details; rather, details should be dependent on abstractions.

High-level modules should not rely on low-level modules for their functionality; both should depend on abstractions. Abstractions should not depend on details; details should depend on abstractions.

Software that is difficult to adapt is considered badly designed. When adjustments are made, things break; you can't reuse it; you can't test the program outside of the application itself. This is due to the unhealthy dependencies that have been built between modules.

It is vital to distinguish between higher-level and lower-level modules in order to understand DIP. Higher-level modules are the fundamental abstractions that make up the application; this is where we can see the processes and the structures that control them. Lower-level modules can be thought of as the modules that carry out the detailed work.
In methodologies like structured analysis and design, parent modules rely on child modules. The issue is that this brings a transitive structure into the equation: a modification made at the lower level has an impact on the modules at the higher levels, which brings with it a certain amount of fragility.

The upper-level modules should have priority in deciding on modifications, and they should force the lower levels to update when required. That way, even when a change has to pass through several levels, there is no cause for concern.

“You know you are working on clean code when each routine you read turns out to be pretty much what you expected. You can call it beautiful code when the code also makes it look like the language was made for the problem.” — Ward Cunningham

The approach is to turn concrete-to-concrete relationships between constructs into fully abstract ones.
To do this, an abstract supertype of each concrete structure is created, and the structures that perform higher-level work are changed to depend on those abstract types.
This creates an abstraction barrier between high-level structures and their details, preventing change from spreading.
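
As a hedged sketch of this inversion (all names are mine): the high-level OrderService depends only on an abstraction it owns, and the low-level detail implements that abstraction:

// Illustrative sketch; names are hypothetical.

// Abstraction owned by the high-level policy.
interface OrderRepository {
    void save(String orderId);
}

// High-level module: depends on the abstraction, never on a concrete database class.
class OrderService {
    private final OrderRepository repository;

    OrderService(OrderRepository repository) {
        this.repository = repository;
    }

    void placeOrder(String orderId) {
        // ... business rules ...
        repository.save(orderId);
    }
}

// Low-level detail: depends on (implements) the abstraction.
class InMemoryOrderRepository implements OrderRepository {
    public void save(String orderId) {
        System.out.println("Stored order " + orderId);
    }
}

class DipDemo {
    public static void main(String[] args) {
        OrderService service = new OrderService(new InMemoryOrderRepository());
        service.placeOrder("A-42");
    }
}

Swapping InMemoryOrderRepository for a real database-backed implementation requires no change to OrderService.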

In the next step I'll be working on, and writing about, Test Driven Development. Keep taking steps.
