Thursday, May 14, 2020

Algorithmic Design and Data Structure Techniques

When developing a structured program, it is vital to know how fast the program will run, for the sake of the user's experience.  As developers we choose between different algorithmic designs and techniques to make the most efficient program possible.

Data structures are applied using algorithms, which manipulate the data of the data structure ("Data Structures: Lecture 2", n.d.).  In order to make our algorithms as efficient as possible, we need to think about what makes one algorithm more efficient than another.  That comes from knowledge of time complexity and space complexity.

Time complexity describes the amount of time an algorithm takes in relation to the amount of input to the algorithm.  The time it takes for an algorithm to process the input is measured by the number of memory accesses or basic operations performed, whether that is the number of comparisons made or the number of times an inner loop needs to be executed ("Data Structures: Lecture 2", n.d.).

Space complexity is the amount of memory used by the algorithm (including the input values to the algorithm) to execute and produce the result ("Space Complexity of Algorithms | Studytonight", n.d.).
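To make these two ideas concrete, here is a minimal Java sketch of my own (not from the cited sources): a linear search whose comparison count grows with the size of the input array, while the extra memory it uses stays constant.

    // Linear search: time grows with n, extra space stays constant.
    public class LinearSearchDemo {
        // Returns the index of target in data, or -1 if it is not present.
        static int linearSearch(int[] data, int target) {
            int comparisons = 0;                      // illustrative counter only
            for (int i = 0; i < data.length; i++) {
                comparisons++;
                if (data[i] == target) {
                    System.out.println("Comparisons: " + comparisons);
                    return i;                         // found after i + 1 comparisons
                }
            }
            System.out.println("Comparisons: " + comparisons);  // worst case: n
            return -1;
        }

        public static void main(String[] args) {
            int[] data = {7, 3, 9, 1, 5};
            System.out.println(linearSearch(data, 5));  // scans the whole array
        }
    }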

Searching and sorting algorithms are two distinct things, but both come into play together when we consider developing structured programs and the way we use them to retrieve specific data in a set or collection in the program.  An algorithm takes an input instance and transforms it into the desired output (Skiena, 2008).  The data input for the algorithm is usually an n-element array; each element is part of a collection of data and each has a key, which is the value to be sorted (Cormen et al., 2009).
A sorting algorithm describes the method by which we determine the sorted order of items, which can be numbers, names, etc.  Steven Skiena explains in his book The Algorithm Design Manual (Skiena, 2008, p. 103) that sorting is worth our attention for several reasons, such as:
  • Sorting is the basic building block that many algorithms are built around. By understanding sorting, we obtain an amazing amount of power to solve other problems.
  • Most of the interesting ideas used in the design of algorithms appear in the context of sorting, such as divide-and-conquer, data structures, and randomized algorithms.
  • Computers have historically spent more time sorting than doing anything else.


When an array is already sorted and we need to search for a particular element, it cuts down on the time it takes to find the element, because we can use searching algorithms that are designed for sorted arrays.  Sorting an array takes its own amount of time to complete, and then we still need to search the array for the element we are looking for.  As the number of elements in the array increases, so does the time it takes to search or sort the array.
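As a rough illustration of why a sorted array speeds up searching, the Java sketch below (my own example, with made-up data) sorts an array first and then uses binary search, which discards half of the remaining range on every comparison instead of scanning every element.

    import java.util.Arrays;

    public class SortThenSearch {
        // Classic binary search: only correct when the array is already sorted.
        static int binarySearch(int[] sorted, int target) {
            int low = 0, high = sorted.length - 1;
            while (low <= high) {
                int mid = low + (high - low) / 2;          // middle of the remaining range
                if (sorted[mid] == target) return mid;
                if (sorted[mid] < target) low = mid + 1;   // discard the left half
                else high = mid - 1;                       // discard the right half
            }
            return -1;                                     // not found
        }

        public static void main(String[] args) {
            int[] data = {42, 7, 19, 3, 88, 25};
            Arrays.sort(data);                    // sorting has its own cost first
            System.out.println(binarySearch(data, 25));
        }
    }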

Choosing one design over another is a task that developers deal with on a daily basis, and they need to keep the benchmarks of time complexity and space complexity in mind when structuring programs.

References
Data Structures: Lecture 2. Cs.utexas.edu. Retrieved from http://www.cs.utexas.edu/users/djimenez/utsa/cs1723/lecture2.html.
Skiena, S. (2008). The Algorithm Design Manual (2nd ed., p. 103). Springer.
Space Complexity of Algorithms | Studytonight. Studytonight.com. Retrieved from https://www.studytonight.com/data-structures/space-complexity-of-algorithms.

Thursday, April 16, 2020

Object-Oriented Programming with Java: The Beginning

This blog post is a guide to what Java programming is, how to get started with Java programming, and the concepts and features of object-oriented programming.

When I began learning Java I was nervous.  I have been learning how to code in Python for about a year and a half now.  Python is a different programming language and has different syntax (the way the code is written, etc.).  I began learning Java from the very start, as if I knew nothing about coding.

I first began by reading about how to install Java on my PC by going to the Oracle Java documentation site.  This site gave me step-by-step instructions on how to complete this task.  After I downloaded and installed Java on my PC, I thought it would be a good idea to learn about Java and how it works.

I came across a breakdown of object-oriented programming (OOP) at the site GeeksforGeeks, which I have used as a basic tool for information on a variety of computer science topics.  It introduced me to the concepts of object-oriented programming and provided a link to more information about each OOP concept.

OOP is built on a few core concepts that make it work.  These concepts are polymorphism, inheritance, encapsulation, abstraction, class, object, method, and message passing.

Polymorphism is the ability of a programming language to treat entities that share the same name differently depending on context, for example two methods with the same name but different parameter lists (overloading), or a subclass providing its own version of a parent method (overriding).
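A small, made-up Java example of this idea: two methods share the name area but have different parameter lists, and the compiler picks the right one based on the arguments.

    public class OverloadDemo {
        // Two methods share the name "area" but differ by parameter list (overloading).
        static double area(double radius) { return Math.PI * radius * radius; }     // circle
        static double area(double width, double height) { return width * height; }  // rectangle

        public static void main(String[] args) {
            System.out.println(area(2.0));        // calls the one-argument version
            System.out.println(area(3.0, 4.0));   // calls the two-argument version
        }
    }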

Inheritance is a major feature of OOP.  Inheritance is the way a class can inherit the features of another class.  This reduces the amount of code we have to write, because class features can be reused as a program gets bigger.
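Here is a minimal, hypothetical Java example: Car extends Vehicle, so it reuses startEngine() without rewriting it.

    class Vehicle {
        void startEngine() { System.out.println("Engine started"); }
    }

    // Car inherits startEngine() from Vehicle, so that code is not rewritten.
    class Car extends Vehicle {
        void openTrunk() { System.out.println("Trunk open"); }
    }

    public class InheritanceDemo {
        public static void main(String[] args) {
            Car car = new Car();
            car.startEngine();   // reused from the parent class
            car.openTrunk();     // defined in the child class
        }
    }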

Encapsulation can be thought of as a way to keep the code and data inside of a class away from other classes.  The code in Java can be labeled as private within the class to keep it from being altered by other classes.
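A short, made-up Java example of encapsulation: the balance field is private, so other classes can only change it through the methods the class chooses to expose.

    public class BankAccount {
        private double balance;   // private: other classes cannot touch this directly

        public double getBalance() { return balance; }

        public void deposit(double amount) {
            if (amount > 0) {     // the class controls how its own data may change
                balance += amount;
            }
        }

        public static void main(String[] args) {
            BankAccount account = new BankAccount();
            account.deposit(100.0);
            System.out.println(account.getBalance());  // 100.0
        }
    }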

Abstraction is the way implementation details are kept from the user.  The user is not interested in how the code works, only in the main idea.  The detailed instructions stay inside the code, and abstraction presents only the final product to the user.
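A small, hypothetical Java sketch of abstraction: the caller only sees the pay method, while the details stay hidden inside the subclass.

    // The abstract class exposes WHAT can be done; subclasses hide HOW it is done.
    abstract class Payment {
        abstract void pay(double amount);
    }

    class CardPayment extends Payment {
        @Override
        void pay(double amount) {
            // the card-network details would be hidden in here
            System.out.println("Paid " + amount + " by card");
        }
    }

    public class AbstractionDemo {
        public static void main(String[] args) {
            Payment payment = new CardPayment();
            payment.pay(25.0);   // the caller only sees the main idea: "pay"
        }
    }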

A class is the blueprint from which all objects are created.  A class declaration describes what is in the class: its modifiers, class name, superclass, interfaces, and body.  The class is the baseline of OOP.

Object:  Everything is an object in Java OOP.  Objects have a state, behavior and identity.  It is up to the programmer to describe these in a way they can be understood by the compiler and other programmers on their team or future programmers who might need to update the program.
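To tie class and object together, here is a made-up example: Dog is the class (the blueprint), and rex and ace are two separate objects created from it, each with its own state but the same behavior.

    // A class declaration: modifier, name, and a body with fields (state) and methods (behavior).
    public class Dog {
        private String name;   // state

        public Dog(String name) { this.name = name; }

        public void bark() {   // behavior
            System.out.println(name + " says woof");
        }

        public static void main(String[] args) {
            Dog rex = new Dog("Rex");   // each "new" creates a distinct object (identity)
            Dog ace = new Dog("Ace");
            rex.bark();
            ace.bark();
        }
    }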

Methods are named blocks of code that contain the instructions for the task they will complete.  I will not go into detail on methods here.  Methods are basically blocks of code that do something when they are called.  I would recommend you refer to the Methods link below to learn about what methods are in Java and how to create and call them.
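As a quick, made-up illustration: the add method below is a block of code that only runs when it is called from main.

    public class MethodDemo {
        // A method: a named block of code that runs only when it is called.
        static int add(int a, int b) {
            return a + b;
        }

        public static void main(String[] args) {
            int sum = add(2, 3);      // calling the method
            System.out.println(sum);  // 5
        }
    }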

Message Passing is how objects communicate with each other in the program.  "Message passing involves specifying the name of the object, the name of the function and the information to be sent".
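A small, hypothetical Java example of message passing: the caller names the object (printer), the method (printMessage), and the information to send (the text and the number of copies).

    class Printer {
        void printMessage(String text, int copies) {
            for (int i = 0; i < copies; i++) {
                System.out.println(text);
            }
        }
    }

    public class MessagePassingDemo {
        public static void main(String[] args) {
            Printer printer = new Printer();
            // "Message passing": the object, the method, and the information to send.
            printer.printMessage("Hello from another object", 2);
        }
    }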

These are the concepts of OOP in Java.  Java is a programming language that has been around for years and provides many features that have evolved and are still evolving every day.  The concepts that I talked about in this blog will help you get an idea of what OOP is and how it is used.  I would suggest going through a tutorial like w3schools that goes through all of what Java has to offer.  It has examples and exercises on each topic to help a newbie work through Java.

As of the date of this blog post I am a few days into reading and studying Java.  I am learning more every day about how to construct a Java program.  From the first few days I have seen that I like how Java is statically typed.  I believe it helps me be more descriptive with my code, and when I look back at lines I wrote earlier, I can get a better grasp of what I did (even without comments).

List of Sites to learn more about OOP and Java:

https://docs.oracle.com/javase/tutorial/index.html

https://www.geeksforgeeks.org/object-oriented-programming-oops-concept-in-java/

https://www.w3schools.com/java/java_methods.asp

Sunday, April 12, 2020

Operating Systems & Design Summary

Computer Operating Systems & Design is a five-week course that I have taken at Ashford University that looked at the operating system's overall role in a computer system (Silberschatz, Galvin, & Gagne, 2014).  The computer operating system creates an environment for the user to interact with programs in a computer system and to solve problems on that computer system.  The user interacts with the programs, which interact with the operating system, which in turn interacts with the hardware of the computer.  The computer's CPU and device controllers are connected through a common bus that provides access to shared memory.
The operating system has several operations that keep it functional.  It provides functions to start the computer.  It configures different devices, manages data and programs, provides a user interface and manages memory.  The concept map photo below shows how the operating system interacts with the user, the system and application programs and the computer hardware. 


Computer programs run as processes, which are carried out by a single thread or by multiple threads ("Process (computing)", 2020).  “A thread is a basic unit of CPU utilization; it comprises a thread ID, a program counter, a register set, and a stack.  It shares with other threads belonging to the same process its code section, data section, and other operating-system resources such as open files and signals” (Silberschatz et al., 2014, p. 163).  A process always has a state, which describes what is currently happening in the process.  The operating system defines five states: new, running, waiting, ready, and terminated.  New is when the process is being created.  Running is when the process's instructions are being executed.  Waiting is when the process is waiting for an event to happen.  Ready is when the process is waiting to be assigned to a processor.  And terminated is the completion of the process's execution (Silberschatz et al., 2014, p. 107).  The picture below shows on the right how the process moves through these states.  The concept map below also shows on the left how the process control block is connected to thread execution.  The process control block is where the code, data, and files interact with the registers and stack.  A process is a program in execution whose work is achieved through thread execution, either by a single thread or by multiple threads.  A single thread performs only one task at a time, while the multi-threaded execution shown on the left lets many tasks be performed at once.  Modern computers have been programmed to use multi-threaded execution to their advantage, even though single-threaded execution is still used for some programs.
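Here is a minimal Java sketch of my own (assuming Java 8 or later for the lambda) showing two threads of the same program running the same task concurrently; it illustrates the idea above and is not code from the textbook.

    public class ThreadDemo {
        public static void main(String[] args) {
            // Two threads of the same process, sharing the program's code and data.
            Runnable task = () -> System.out.println(
                    Thread.currentThread().getName() + " is running");

            Thread first = new Thread(task, "worker-1");
            Thread second = new Thread(task, "worker-2");

            first.start();    // both tasks can now make progress concurrently
            second.start();
        }
    }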
Memory is essential and central to the operation of all computers in this day and age.  The CPU fetches instructions from memory according to the value of the program counter.  The memory unit only sees a stream of addresses plus read requests, or addresses plus data and write requests.  There is also memory built directly into the hardware (the processor).  Main memory and the registers are the only storage that the CPU may access directly.  Each process needs a separate memory space and is given its own range of addresses accordingly (Silberschatz et al., 2014).  As you can see in the main memory node of my concept map, in user space the CPU hardware compares every address generated against the base and limit registers.  The operating system block and the user mode block are separated because any attempt by a program in user mode to access operating system memory or other users' memory results in a fatal error, which helps to keep the code and data structures separate (Silberschatz et al., 2014).
            The CPU generates a logical address, which is mapped to a physical address in memory.  The logical and physical addresses can be identical, but under an execution-time address-binding scheme the two can differ.  Logical addresses are also referred to as virtual addresses.  The picture below shows how a process can be swapped out of memory when it is not being executed and swapped back in when it is ready to execute.  This shows how multiprogramming can be used to advantage within the memory management system (Silberschatz et al., 2014).
Virtual memory separates logical memory from physical memory.  This allows a large amount of memory to be presented to programs even when only a smaller amount of physical memory is available.  With demand paging, virtual memory only brings a page in when that page is referenced.  This allows a process to run even when its memory image is not entirely in main memory (Silberschatz et al., 2014).

            Having access to saved material is one of the most important tools for users.  The file system is where all of this happens.  The file system contains two separate parts: a collection of files and a directory structure.  The file system stores material on storage devices like hard disks, magnetic disks, and optical disks.  These physical storage devices store the files for later use, and the operating system maps the files onto them.  Because they are nonvolatile, these devices keep the data so it can be accessed after the computer is shut down, unlike RAM (random access memory), which cannot (Silberschatz et al., 2014).
            Inside the operating system, file system management keeps the files and data saved by the user in order.  The operating system's grouping of these files is called a directory, which keeps related files together.  Operating systems use multiple schemes for the logical structure of a directory (Silberschatz et al., 2014).  In the concept map below I was able to display the differences in the logical structures of a directory.  These types of directory structures are the single-level directory, two-level directory, acyclic-graph directory, and general graph structure.  I put emphasis on the single-level directory, two-level directory, and acyclic-graph directory.
            A single-level directory has all the files in the same directory.  There is not much wiggle room when the number of files increases or there is more than one user on the system, because every file in the single directory must have a different name.  In the concept map below, the single-level directory holds differently named files in one directory.  A two-level directory allows each user to have their own directory, and that user's files are kept together inside it.  Different users may even use the same file names, because each user has a separate directory.  In the concept map below, I wanted to emphasize the acyclic-graph structure because of its ability to let different directories share files and sub-directories with other directories.  It is good to know that a shared file or directory is not the same thing as two copies of the file (Silberschatz et al., 2014).
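As a loose, made-up illustration of the two-level idea in Java (assuming Java 11 or later for Files.writeString, and a writable local folder named demo-root), each user gets their own directory, so both can hold a file with the same name.

    import java.io.IOException;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.nio.file.Paths;

    public class TwoLevelDirectoryDemo {
        public static void main(String[] args) throws IOException {
            // Two users each get their own directory, so both can have a "notes.txt".
            for (String user : new String[]{"alice", "bob"}) {
                Path userDir = Paths.get("demo-root", user);   // made-up local path
                Files.createDirectories(userDir);              // parent + user directory
                Files.writeString(userDir.resolve("notes.txt"), "hello from " + user);
            }
        }
    }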



Most of what people do on their operating systems is important to them.  A model and ways to protect and secure the operating system need to be in place.  To protect a computer system, an access matrix is a general model commonly used.  “The access matrix provides an appropriate mechanism for defining and implementing strict control for both static and dynamic association between processes and domains” (Silberschatz et al., 2014, p. 609).  In my concept map I have a node that shows an access matrix.  Objects are the columns and domains are the rows.  If an access right appears at the intersection of a row and a column, then that domain has that specific access right to that object.  This lets us see what access rights are available under different policies.  I also added a node for capability lists, which are closely related to the access matrix.
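A very simplified, made-up Java sketch of the access-matrix model described above: domains are rows, objects are columns, and each cell lists the rights that domain has for that object.

    public class AccessMatrixDemo {
        public static void main(String[] args) {
            String[] domains = {"D1", "D2"};        // rows
            String[] objects = {"File1", "File2"};  // columns
            // Each cell holds the access rights the domain has for the object.
            String[][] matrix = {
                {"read", ""},              // D1 may read File1, has no rights on File2
                {"read,write", "execute"}  // D2 may read/write File1 and execute File2
            };

            // Checking a right means looking at the intersection of a row and a column.
            System.out.println(domains[1] + " rights on " + objects[0] + ": " + matrix[1][0]);
        }
    }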
            The concept map below has a node that represents a MULTICS system.  The MULTICS system organizes its protection domains hierarchically in a ring structure (Silberschatz et al., 2014).  “MULTICS has a segmented address space; each segment is a file, and each segment is associated with one of the rings.  A segment description includes an entry that identifies the ring number” (Silberschatz et al., 2014, p. 606).
            With my concept map I made a hierarchy of the four levels of security measures and ways to protect each level.  The text makes a good point when it states, “Security at the first two levels must be maintained if operating-system security is to be ensured” (Silberschatz et al., 2014, p. 636).  This gives a good look at how important it is to protect the operating system before it can even be touched by a human.  It also shows that giving a user access only to the parts of the operating system that they must have leaves room for fewer mistakes.  In my concept map, I also list attacks that can be attempted to break security.



Computer Operating System Theory & Design has taught me many things that I can take into my future career.  I am studying for a degree in Computer Software Technology, through which I would like to become a Software Engineer.  Understanding how the operating system works will let me develop software for operating systems better.  It will give me an understanding of how the operating system works beneath the software that is created for it.  The subject of single- and multi-threaded processes will help me begin to know what type of process works best for each task.  I will take what I learned in this course to be the best software developer that I can be.
References
Silberschatz, A., Galvin, P., & Gagne, G. (2014). Operating System Concepts Essentials (2nd ed.). John Wiley & Sons.
           

Saturday, November 9, 2019

Tech Topic Connection

Data Management in Information Technology
            Data can be entered into a computer in many ways.  After this data is entered, it needs to be ordered and managed to meet the requirements of the user.  As we progress into a future where most data is entered into computers for use with data collection software, it is important to have the data managed in a way that makes it easily accessible.  Data management is essential to the structure and growth of information technology and computer science.  In this paper I will explain how data management has been used across all aspects of information technology and modern computer systems, and how it has evolved over the years as computers have developed.
            Computer data storage is a technology consisting of computer components and recording media that is used to retain digital data.  It is a main function and important component of computers.  Data management arose in the 1980s as technology moved to random access storage (also known as random access memory).  “Random access memory (RAM) is a type of electronic storage technology that provides fast data access to support essential computing tasks” (2014).  When a user enters a command via input devices such as the keyboard or mouse, the CPU interprets the command and instructs the hard drive to load the required data into the RAM in order to make it accessible. RAM offers faster read/write speeds than any other type of storage technology used in personal computers (2014). 
            There are many different computer programming languages that produce various kinds of output to a computer with a set of instructions.  Data manipulation language (DML) is a computer programming language used for adding, deleting and modifying data in a database.  A DML is often a sublanguage of a broader data language such as SQL, with the DML comprising some of the operators in the language (Chatham, 2012).   Structured Query Language (SQL) is a widespread data manipulation language which is used to retrieve and manipulate data in a relational database.  A relational database refers to a database that stores data in a structured format, using rows and columns (2019). 
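As a hedged illustration of DML statements being issued from a program, the Java sketch below uses the standard JDBC API to run an INSERT and then a SELECT; the connection URL, the tasks table, and its columns are made up for the example, and a matching JDBC driver and an existing table would be needed for it to actually run.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.SQLException;

    public class DmlDemo {
        public static void main(String[] args) throws SQLException {
            // Hypothetical connection string; a real one depends on the database in use.
            try (Connection conn = DriverManager.getConnection("jdbc:sqlite:tasks.db")) {
                // DML: add a row (assumes a "tasks" table already exists).
                try (PreparedStatement insert =
                         conn.prepareStatement("INSERT INTO tasks(name, priority) VALUES (?, ?)")) {
                    insert.setString(1, "Back up files");
                    insert.setString(2, "high");
                    insert.executeUpdate();
                }
                // DML: retrieve the rows back out of the table.
                try (PreparedStatement query =
                         conn.prepareStatement("SELECT name, priority FROM tasks");
                     ResultSet rows = query.executeQuery()) {
                    while (rows.next()) {
                        System.out.println(rows.getString("name") + " - " + rows.getString("priority"));
                    }
                }
            }
        }
    }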
            Many types of database management software have been developed over the years.  Within the realm of database management software, common applications like MS Access, Visual FoxPro or SQL help to handle different kinds of data within their respective databases or data containers.  Apart from just taking in data, data management software often contemplates other long-term goals, such as comprehensive security for data, data integrity, and interactive queries.  Data management software may also look at the life cycle of data to provide security in all phases: during the generation of data, during data storage, and during eventual data disposal.  Data managers may need to set time frames for data life cycles in order to control the maintenance burden on a system and to answer key questions about data security and compliance with standards or regulations pertinent to an industry or field (2019).  I recently had the chance to use Microsoft Access to prioritize my daily tasks, ranking each task as high, medium, or low priority.  I found that Microsoft Access files are large and can reach multiple gigabytes in size.
            Data management is used in different areas such as network architecture, management, and security.  “Military keeps records of millions of soldiers and it has millions of files that should be keep secured and safe. As DBMS provides a big security assurance to the military information so it is widely used in militaries” (Sharma, 2017).  A company being able to have its data stored in a secure database is essential to the security of important information.  The integrated database system was an early network database management system that was widely used by industry.  “The network database model allows each record to have multiple parent and multiple child records, which, when visualized, form a web-like structure of networked records. In contrast, a hierarchical model data member can only have a single parent record but can have many child records” (2019).
            There are many ways for individuals and companies to store and manipulate their data.  Each needs to choose the type of database management system that best suits the task.  Together with the advancement of database software and security practices, data can be kept in good order and secure, making way for a “Big Data” present and future.

Tuesday, November 5, 2019

Network Security

            As most of our communications and actions continue to move online, privacy and security are important topics to think about regularly.  Keeping the information that one puts inside and outside of their personal network as secure as possible is good practice for users of today's computers.  This can be done by having knowledge of what threats are out there and what others can do to gain access to this information.  Organizations and individuals need to be cautious of how vulnerable their information and systems can be to outside threats and put their best foot forward in protecting them.
            We use computers for different activities in our lives.  In the first quarter of 2018, “U.S. adults spent three hours and 48 minutes a day on computers, tablets and smartphones” (Fottrell, 2018).  Having a stable knowledge of how to protect this time from security breaches is important.  A security breach is a case of unauthorized computer access, such as to a person's private email or social media (Vahid & Lysecky, 2017).  We do things on our computers in our free time and for work that we would like to be seen only by authorized individuals.  When a company's system is compromised by a security breach, it is possible for the company to lose money in hidden costs such as loss of business, negative impact on reputation, and employee time spent on recovery.  The financial damage caused by a data breach now costs companies an average of $3.86 million (Weisbaum, 2018).
            Today there are many different types of attacks that can be executed, and the ping command is used in some of them.  “The ping command is a Command Prompt command used to test the ability of the source computer to reach a specified destination computer” (Fisher, 2019).  Using the ping command, many attackers execute the attack known as denial of service.  A distributed denial of service (DDoS) attack is a malicious attempt to make an online service unavailable to users, usually by temporarily interrupting or suspending the services of its hosting server.  Unlike other kinds of cyberattacks, DDoS assaults don't attempt to breach your security perimeter.  Rather, they aim to make your website and servers unavailable to legitimate users.  DDoS can also be used as a smokescreen for other malicious activities and to take down security appliances, breaching the target's security perimeter.
The ping of death (PoD) attack is an attack in which an attacker attempts to crash, destabilize, or freeze the targeted computer or service by sending malformed or oversized packets using a simple ping command.  PoD attacks exploit legacy weaknesses that may have been patched in target systems; however, in an unpatched system, the attack is still relevant and dangerous.  Recently, a new type of PoD attack has become popular.  In this attack, commonly known as a ping flood, the targeted system is hit with ICMP packets sent rapidly via ping without waiting for replies.  To avoid ping of death attacks and their variants, many sites block ICMP ping messages altogether at their firewalls.  However, this approach is not viable in the long term (n.d.).
Computer systems are vulnerable to many different security threats.  “On-line systems and telecommunications are especially vulnerable because data and files can be immediately and directly accessed through computer terminals or at points in the telecommunications network” (Laudon & Laudon, 2007).  Computers have security holes and vulnerabilities, but attacks that rely on human interaction, such as social engineering and phishing, have become popular.  According to a 2018 study, 17 percent of people fall victim to social engineering attacks and 83 percent of all companies have reported that they experienced phishing attacks (Lopez, 2019).
Social engineering entails tricking people into giving up their confidential information or manipulating them into doing something.  There are many types of social engineering attacks, with email spam and phishing being a couple of examples.  Phishing typically happens when someone is manipulated into logging in to a site such as their banking account.  This is usually done with a fake email sent to the victim asking them to log in to a fake online banking site; the attacker then has the information needed to access the real account.  People are vulnerable to phishing attacks because phishing emails and websites are put together well enough to look identical to the real bank.  When the phishing attack is executed, the individual can see money come out of their bank account unexpectedly.  A way for people to avoid being victims of phishing is to not open links in emails and to be cautious of all communications received.
Social engineering manipulation has been around for a long time, but it is still a way attackers get important information.  Employees at a company can receive emails from attackers posing as a potential customer or a current employee.  The email may even appear to come from the exact address of a supervisor asking for the password to the system, as if they had forgotten it.  This manipulation can give the attacker access to all the important documents and information in the company's computer database.  If an employee is asked for important information such as a password, they should go directly to that person to give the password rather than replying by email.  Businesses should provide continuous education about potential social engineering attacks.  Spending a small amount of money on training about potential attacks can go a long way in the long term.
There are many ways for an attacker to access an individual's or company's private information.  As computer usage continues to increase and new attacks continue to appear, being cautious is a good first step toward privacy protection.  Unauthorized access to private information can put mental stress on individuals and financial stress on companies.

Computers in the Workplace

Computers have helped advance the production of many industries, specifically manufacturing and production.  “An important component of industrial computers is the programmable logic controller, or PLC, used to automate production and processes. In recent years, improved PAC, programmable automation controller, technology has been developed” (2019).  PLCs have been used to automate production equipment such as robotic devices and assembly lines.  The PLC also acts as a fault-reporting device that tells the operator when a piece of production equipment has faulted and needs attention.
“The PLC receives information from connected sensors or input devices, processes the data, and triggers outputs based on pre-programmed parameters.  Depending on the inputs and outputs, a PLC can monitor and record run-time data such as machine productivity or operating temperature, automatically start and stop processes, generate alarms if a machine malfunctions, and more. Programmable Logic Controllers are a flexible and robust control solution, adaptable to almost any application” (Unitronics, 2019).  The PLC also receives input from a human operator through the Human Machine Interface (HMI).  HMIs can be simple displays with a text readout and keypad, or large touchscreen panels, and they enable users to review and input information to the PLC in real time.
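This is not real PLC ladder logic, but a rough, made-up Java sketch of the loop described in the quote: read an input, compare it with a pre-programmed parameter, and trigger an alarm output when the threshold is exceeded.

    public class PlcLoopSketch {
        static final double MAX_TEMPERATURE = 80.0;   // pre-programmed parameter

        // Stand-in for a real sensor read; a real PLC would poll an input module.
        static double readTemperatureSensor() {
            return 60.0 + Math.random() * 30.0;
        }

        public static void main(String[] args) throws InterruptedException {
            for (int cycle = 0; cycle < 5; cycle++) {          // a few scan cycles
                double temperature = readTemperatureSensor();   // input
                System.out.println("Temperature: " + temperature);
                if (temperature > MAX_TEMPERATURE) {            // process
                    System.out.println("ALARM: machine over temperature");  // output
                }
                Thread.sleep(1000);                             // wait for the next scan
            }
        }
    }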
It was in the automotive industry in the USA that the PLC was first used to increase productivity.  Before PLCs, the changeover to new yearly models was a much longer process.  The industrial PC as we know it today appeared in the 1980s.  Industrial computers are more rugged than home computers, designed for use in environments that may be hot, cold, dusty, or wet, and their software does not require updating as frequently (2019).
The advancement of computers in the manufacturing industry has brought about conversation of job loss for humans to robots.  “Approximately 50% of Flex’s manufacturing processes are already fully automated. Automation enables a level of accuracy and productivity beyond human ability—even in environments that would be considered unsafe for humans. The new generation of robotics is not only much easier to program, but easier to use, with capabilities like voice and image recognition to re-create complex human tasks. Another advantage of robots is that they do precisely what you ask them to do – nothing more, nothing less. And while automation eliminates some of the most tedious manufacturing jobs, it is also creating new jobs for a re-trained workforce” ("Five Trends For The Future Of Manufacturing", 2019).
Computers in the manufacturing industry have made factory jobs easier for the people who work on the factory floor.  I have worked in multiple factory settings over the last 15 years and I love how computers have cut down on the physical work.  I currently work at a solar panel manufacturing factory, and the process it takes to make the solar panels would not be possible without computers.  Computers help greatly with my job as a manufacturing operations technician, in that most of my job is to monitor and troubleshoot robots.  I have seen how the reduction in physical work leads to less stress on the body for my co-workers and myself as we get older.




Traveling Through a Network

Ping and traceroute are useful tools for viewing and troubleshooting internet connections.  “A computer communicates via the Internet by sending a packet, containing information like an address for a destination computer, the data size, and the data itself” (Vahid & Lysecky, 2017).  I used ping and traceroute to see the results for three different websites: google.com, smh.com.au, and ameblo.jp.  These websites are based in three different countries, the United States, Australia, and Japan respectively.
When I used the ping command I was presented with multiple lines of output.  Ping gave me the domain name and corresponding IP address of the destination I entered.  It also gave me details of the echo replies received from the destination, statistics showing what happened to the packets, and the range of round-trip times to receive an echo reply.  In the results I received from the three websites in different countries, the round-trip times in milliseconds for the foreign websites (smh.com.au and ameblo.jp) were faster than for google.com.
When I used the traceroute command I was presented with another display of multiple lines of output.  The information given by traceroute was different from ping's and provides more detail.  The extra information given by the traceroute command is the exact route taken to each server.  The relationship between the round-trip time and geographical location is not an exact one.  The round-trip time depends on the number of routers between me and the target.  Ping and traceroute commands can be used to troubleshoot internet connection problems.  Traceroute is especially useful because it helps tell where the network connection slows and congestion occurs (2019).  Knowing when and where a timeout occurs can be useful in troubleshooting an internet connection.  Receiving a timeout or error response can happen for multiple reasons.  If a timeout is received, the host may be down or unreachable at the time.  The ping command can also be disabled by the host's system administrator.
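The following is not the actual ping utility, just a small Java sketch along the same lines: java.net.InetAddress resolves a host name to an IP address and tests whether the host answers within a timeout, and timing that call gives a rough round-trip figure (google.com is used here only as an example).

    import java.net.InetAddress;

    public class ReachabilityCheck {
        public static void main(String[] args) throws Exception {
            InetAddress host = InetAddress.getByName("google.com");
            System.out.println("Resolved address: " + host.getHostAddress());

            long start = System.currentTimeMillis();
            boolean reachable = host.isReachable(3000);   // 3-second timeout
            long elapsed = System.currentTimeMillis() - start;

            // A false result can mean the host is down, unreachable, or blocking probes.
            System.out.println("Reachable: " + reachable + " in about " + elapsed + " ms");
        }
    }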
Here are screenshots of me using ping and traceroute for two different websites in two different geographical locations (google.com and smh.com.au).  The first example is using ping.


These second screenshots are using Traceroute