Discussions

 

These are some unedited discussions from my graduate classes. Some people may find these interesting.

 

RAD & SDLC Models

11/22/2014 4:47:00 PM by Eddie Jackson

When implementing a new system, there are many different models that a company can choose to follow (SDLC, RAD, Prototyping, etc.). Do some research to find a case study on one of these models. What was the project? What tool did they use? Was their implementation successful? Compare and contrast two different models and then explain when each might be used. What are the benefits and drawbacks of each model?

The model I have chosen for further research is RAD, or rapid application development. The company I found implementing the RAD model is FileMaker. FileMaker offers a business efficiency platform, enterprise software that helps businesses streamline processes, including reporting, charting, project tools, and multi-platform sharing (FileMaker, n.d.). FileMaker’s implementation of RAD was successful. The company’s angle in adopting RAD was return on investment (ROI); the entire white paper I read was about ROI being “supercharged” using the concepts of RAD. For example, FileMaker (using RAD) replaces manual design and coding with automated techniques and strategies. Additionally, the FileMaker platform breaks projects up into smaller components so that each part can be worked on or managed at the same time; FileMaker calls this process “spiral development.” The implementation of RAD in FileMaker has several benefits, including faster development cycles, codeless application creation, an integrated design environment, reduced training requirements, an integrated database, multiple external database connectors, and cross-platform support (FileMaker, n.d.). I have included the link to the white paper in the references; feel free to check it out.

FileMaker justifies a positive ROI by pointing to the overhead RAD removes from projects, which helps developers create applications faster and modify them more easily. Likewise, RAD reduces operational costs by allowing applications to be managed and administered more efficiently even after they have been implemented.


Compare and Contrast

In my compare and contrast, the two models I have selected are rapid application development (RAD) and the system development life cycle (SDLC). In a RAD model, the focus is on including the customer/client in order to speed up the design and implementation of a product. In an SDLC model, the primary focus is efficiency, documentation, and effective management of a project. RAD (or agile) can provide immediate results by using a phased approach to development; this includes business, data, and process modeling, application generation, and testing and turnover (ISTQB Exam Certification, n.d.). In contrast, SDLC uses design/development, testing, and implementation, commonly delivered as an incremental approach. While each model is meant to speed up the delivery of a product or service, each has its advantages and disadvantages. In a RAD model, the advantages include reduced development time, increased reuse of components, fast initial product reviews, real-time customer feedback, and integration that is considered from the very beginning (ISTQB Exam Certification, n.d.). A few disadvantages of the RAD model are that it depends on strong, experienced teams; business requirements must be properly identified up front; team members are required to have modeling skills; and it is not very practical for long-term projects, being best suited to project timelines of two to three months (ISTQB Exam Certification, n.d.).

Likewise, the SDLC model has advantages and disadvantages. The advantages of an SDLC model include: it creates working software quite quickly; it is very flexible to project changes; it is easy to test and debug; and risks are easier to manage because of the available time and the built-in process of iteration (ISTQB Exam Certification, n.d.). The disadvantages of the SDLC model are that it needs proper planning and design; the whole system, product, or service must be clearly defined before it can be assessed; and it does not include customer input up front and throughout the project.

The best time to use a RAD model is when time is limited, few changes will be made to the project, and customers need to be, or should be, included in the design process. SDLC is best used when timelines are extended, detailed documentation is required every step of the way, and many changes may be made during the project. Both models are about getting the product or service to market faster, and both depend on understanding the needs of the business and clearly defining the variables in the project.


Personal

In my own professional experience, I have learned that no single model will always be the answer; models have to be treated as tools, and the proper tool must be used for the specific task at hand. This means that companies, developers, and even customers should recognize that the governing dynamics of a project define which resources should be implemented and how those resources should be applied. Each project should be assessed and designed in a way that allocates the proper resources, uses the right people, and, of course, employs the appropriate business models. Not only is this the best strategy for delivering a high-quality product to the customer, but it will also reduce inefficiencies, prevent possible pitfalls, and cut the unnecessary waste associated with product delivery.


References

FileMaker. (n.d.). Supercharging return on investment with rapid application development tools. Retrieved from https://www.filemaker.com/downloads/pdf/it_tco_wp.pdf

ISTQB Exam Certification. (n.d.). What is Incremental model- advantages, disadvantages and when to use it? Retrieved from http://istqbexamcertification.com/what-is-incremental-model-advantages-disadvantages-and-when-to-use-it/

ISTQB Exam Certification. (n.d.). What is RAD model- advantages, disadvantages and when to use it? Retrieved from http://istqbexamcertification.com/what-is-rad-model-advantages-disadvantages-and-when-to-use-it/

 

 

UCITA

1/14/2015 1:39:21 PM by Eddie Jackson

Suppose the legislature in your state is debating the adoption of UCITA and you have been called as an expert witness. Based on your research, decide whether you would recommend adoption of the UCITA or not. State your position. What are the three most important ideas you want your legislators to get from your testimony?

The Uniform Computer Information Transactions Act (UCITA) is a proposed uniform contract law that specifies how transactions in digital information should be governed. The proposed law would provide a set of universal standards that could be applied to software licensing and other forms of digital property, such as music and movies (Andersen, Raymakers, & Reichenthal, 2001). As an expert witness, I have been charged with the responsibility of advising my state on the practicality of UCITA and whether this legislation should be enacted within the state. It is my position that UCITA is not practical and thus should not be implemented in my state. There are three reasons why I believe UCITA is not a viable option.

First is scope and vagueness. Because UCITA is being positioned as a “universal” standard, it would seem the wording of the law should be relatively simple, so that everyone affected by the legislation can understand it; instead, the law is roughly three hundred pages of text, far too long for the general population (Andersen, Raymakers, & Reichenthal, 2001). Likewise, because the law is so large, there is the chance that software companies will find loopholes and exploit them, negatively affecting software usage and the consumer experience of the digital property. Furthermore, the language used in the legislation is intentionally vague so as to be compatible with other laws. Supporters of UCITA say the law was only meant to be a set of guidelines; however, I believe the legislation is trying to give more power to distributors while not factoring in the rights of the consumer. This will lead to poor business practices by digital property distributors and publishers, practices that impose unfair licensing and encourage the sale of defective software (Huggins, 2014).

The second reason I do not support UCITA is liability. The legislation specifically includes clauses that permit companies to escape responsibility for buggy software. The thought process behind this is that software companies should be given certain liberties when it comes to marketing and selling software. The problem with this thinking is that companies may, and will, release software knowing it was not ready for shipment. The burden of defective software would be placed on the consumer, which is not only poor business practice but also unethical (Huggins, 2014).

The third reason I do not support UCITA is freedom of speech. The UCITA legislation contains wording suggesting that customers do not have the right to post information or make public statements about a company’s software, or other intellectual property, without the express permission of the company (Andersen, Raymakers, & Reichenthal, 2001). It is obvious that the legislation is trying to extend its reach well into the civil liberties of the consumer. Even though there may be instances where a person should not post information online, for example if they are a beta tester, the UCITA legislation is trying to suppress other rights of the consumer, which is unethical. For these three reasons, my advice is to pass on the UCITA legislation.


References

Andersen, H., Raymakers, J., & Reichenthal, J. (2001, March). What is UCITA? Retrieved from http://cs.stanford.edu/people/eroberts/cs201/projects/2000-01/ucita/index.html

Huggins, James. (2014, May 10). UCITA: Uniform Computer Information Transaction Act. Retrieved from http://www.jamesshuggins.com/h/tek1/ucita.htm

 

 

Privacy Cases

01/04/2015 10:41:31 AM by Eddie Jackson

Research legal cases on the Internet that provide examples of ethical issues related to privacy. Analyze the two cases that you found most compelling.


Case 1


We like to think that the more technology we have, the less privacy we possess; this is not always the case. In my research, I found two legal cases that strengthened protections and thus preserved privacy. The first legal case, Riley v. California, centered on whether police officers had the right to search a person’s cellphone during routine traffic stops. In one particular instance, officers pulled over David Riley for an expired car registration. When they searched his cellphone, they were able to determine he had ties to a street gang, and they eventually connected him to a shooting. Mr. Riley ended up going to prison for fifteen years. The Riley case went all the way to the Supreme Court, which ruled that what the officers did was unlawful and required a warrant. In this case, it was obvious to me that the officers had carried out a search and seizure, which falls under the protections of the Fourth Amendment. Toward the end of the case, the Justice Department argued that a cellphone was no different from a person’s wallet. The Chief Justice did not agree, and now a warrant is required to search a person’s cellphone. I would like to state my own thoughts here. I understand the Fourth Amendment and why it is important; however, if you do not have anything to hide, why would you care if you were searched? This Mr. Riley was a felon who was brought to justice. I am sure the streets are safer without him on them. Just my thoughts. (Liptak, 2014)


Case 2

The second case, a federal suit against Google, centered on Wi-Fi wiretapping and whether Google had the right to capture wireless data for its own use. Let me start by saying that I thought Google’s mantra was “Don’t be evil.” To me, this case is black and white; Google is guilty of wiretapping private citizens. The short of the case is that the privacy of citizens is not so easily dismissed.

If you have ever used Google Street View, you know it presents photographic street layouts of the world. I must say, it really is amazing technology; shockingly, however, Google has not only been recording street views, it has also been capturing wireless traffic. In dozens of countries, Google found Wi-Fi hotspots and captured that data for later use. Google does not deny this fact (although it did at first). Google argues that capturing “public” wireless transmissions cannot be considered wiretapping, and that Wi-Fi is more like radio communications, which are not covered under wiretapping laws. Google’s actions have spawned dozens of civil suits, which are being consolidated into one class-action lawsuit in California. Google continues to argue that intercepting Wi-Fi is not wiretapping, but the court has different thoughts. The federal court believes that Wi-Fi may be open to the public, but the transmission itself is not so easy to access, and for good reason. Someone accessing the wireless bit stream can track a person’s every move; in fact, they can trace every keystroke made and every website visited, along with capturing usernames and passwords. It is this data, this information, that would be considered private, and thus Google is guilty of invading the privacy of citizens. Might I add, this is not the first time Google has been in legal trouble. There have been intellectual property disputes (such as selling trademarked keywords), the matter of allowing Canadian pharmacies to sell pharmaceutical drugs to Americans, and a long list of “gray area” ethical dilemmas. (Streitfeld, 2013)


References

Liptak, Adam. (2014, June 25). Major Ruling Shields Privacy of Cellphones. Retrieved from http://www.nytimes.com/2014/06/26/us/supreme-court-cellphones-search-privacy.html?_r=0

Streitfeld, David. (2013, September 10). Court says privacy case can proceed vs. Google. Retrieved from http://www.nytimes.com/2013/09/11/technology/court-says-privacy-case-can-proceed-vs-google.html

 

 

SOPA

12/26/2014 02:43:07 PM by Eddie Jackson

Recently, the government attempted to restrict the trafficking in intellectual property by introducing the Stop Online Piracy Act (SOPA) and the Protect IP Act (PIPA). Both of these proposals were extremely controversial as many viewed them as a restriction on first amendment rights. Select one of these bills and research it thoroughly. Explain the purpose behind the bill and why it was controversial. Then, explain whether you feel that it is an illegal restriction on the freedom of speech, or if it is a necessary law to protect intellectual property rights. 

When we talk about digital rights, we are talking about piracy, intellectual property, and constitutional rights. The idea that a “digital product” is not governed by the same laws that protect off-the-shelf products is highly controversial. There are those, mainly product owners and distributors, who argue that digital piracy is actually a worse crime than retail theft: digital piracy allows a stolen product to reach tens of thousands, if not millions, of people, so the potential losses in sales are far greater.


Piracy


This is why there has been a push for federal regulation of digital piracy. To enforce the laws that govern intellectual property, legislation has been proposed; this legislation can be seen in the SOPA bill, or Stop Online Piracy Act. What SOPA permits the government to do is shut down websites where illegal content is being hosted. Additionally, the SOPA bill gives the government the power to regulate, and even shut down, websites that fund, link to, or have visitors with the intent of downloading illegal content (Schatz, 2012). Furthermore, SOPA provides the Justice Department with the legal ability to have Google remove links from its search engine and to block sites at the DNS level, meaning the domain would simply fail to resolve when someone tried to reach the website in an Internet browser (Schatz, 2012).
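To illustrate what “blocking at the DNS level” means mechanically, here is a small Python sketch I put together; the blocklist and domain names are hypothetical, and real enforcement would have happened inside ISP resolvers rather than in application code like this:

# Toy illustration of resolver-side blocking: domains on a (hypothetical)
# blocklist simply fail to resolve, so the browser behaves as if the site
# does not exist. Real DNS filtering would live in the ISP's resolver.
import socket

BLOCKED_DOMAINS = {"piracy-example.net"}  # hypothetical blocklist entry

def resolve(hostname):
    """Return an IP address for hostname, or None if it is blocked."""
    if hostname.lower() in BLOCKED_DOMAINS:
        return None
    return socket.gethostbyname(hostname)

print(resolve("piracy-example.net"))  # None: the domain is filtered out
print(resolve("example.com"))         # resolves normally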


Contrast


Of course, this has drawn much criticism from the opposing side, which says this is a direct violation of the First Amendment right to freedom of speech (Cornell University Law School, n.d.). Also, because the language in the SOPA bill is so vague, there is the possibility that the U.S. government could censor websites without evidence, or censor sites that did not realize pirated content was being hosted on them. This could be a major problem for many popular sites, such as Facebook, eBay, Amazon, and even LinkedIn. Can you imagine going to Facebook or LinkedIn and finding a federal government shutdown notice posted? This is exactly what those who oppose the bill, and bills like it, are concerned about.


Solutions


This leads us back to the question of how exactly we protect digital rights. Well, there are digital signatures, activation, certificates, and a long list of ways to encrypt data streams (Lock Lizard, n.d.). However, the burden of securing digital rights still falls on the owners, not the government. I honestly think the music and movie industries are being lazy and are looking for a federal solution to a global problem; it does not exist.
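To give a concrete sense of one of those owner-side protections, here is a minimal sketch of digital signing and verification; it assumes the third-party Python cryptography package, and the content and key handling are simplified placeholders rather than any vendor’s actual DRM scheme:

# Minimal signing/verification sketch (assumes: pip install cryptography).
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes
from cryptography.exceptions import InvalidSignature

content = b"licensed e-book or media payload"   # hypothetical digital product

# The publisher generates a key pair and signs the content once.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
pss = padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                  salt_length=padding.PSS.MAX_LENGTH)
signature = private_key.sign(content, pss, hashes.SHA256())

# The reader/player ships with the public key and rejects altered copies.
public_key = private_key.public_key()
try:
    public_key.verify(signature, content, pss, hashes.SHA256())
    print("Signature valid: content is authentic")
except InvalidSignature:
    print("Signature invalid: content was tampered with")

The point of the sketch is simply that the verification step lives in the owner’s own software, not in a government blocklist.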


Personal 


When thinking about legislation that protects digital rights, I believe laws are, and can be, abused; we have far too many laws already. Is this a violation of someone’s First Amendment rights? No, I do not think so. Should there be legislation that protects digital rights? No, I do not think so. If the government wants to make more laws, it should mandate that the burden of securing digital rights be placed on the owners and distribution companies. I am sorry that piracy exists, but cramming more laws down the throats of citizens is nothing more than tyrannical oppression, and we have enough of that already. But what about the poor recording and movie industries? They are losing billions of dollars. Well, lucky for them, they have billions to lose. They should spend their time and effort protecting their assets. There is zero proof that laws will stop global, worldwide piracy. In fact, every time the government tries to stop one site, ten more sprout up. Remember Napster? I remember that as soon as that site was taken down, literally a dozen new ones popped up. This leads me to believe SOPA would be a failure, and would succeed only at instituting more laws that cannot possibly be enforced.


References

Cornell University Law School. (n.d.). First Amendment. Retrieved from http://www.law.cornell.edu/constitution/first_amendment

Lock Lizard. (n.d.). Our DRM technology. Retrieved from http://www.locklizard.com/digital_rights_management_technology.htm

Schatz, Amy. (2012, January 18). What Is SOPA Anyway? A Guide to Understanding the Online Piracy Bill. Retrieved from http://www.wsj.com/articles/SB10001424052970203735304577167261853938938

 

 

Technology: Good or Evil?

12/20/2014 06:37:23 PM by Eddie Jackson

Some say that no technology is inherently good or evil. Rather, any technology can be used for either good or evil purposes. Research ways in which technology has helped or hindered society. Read at least two articles for each view (total of four articles). Summarize the decisive points for each side. What are your thoughts on how technology has helped or hindered society? Choose one side of the debate and explain your position.

We, as humans, love to place labels on things; pretty much everything, actually. It has to be good or evil, successful or unsuccessful, politically red or blue, religious or nonreligious, when in reality most of those ideas only exist to serve our lazy brains. I say lazy because properly analyzing every single situation, person, or event in life would require too much brain power, so our brain takes shortcuts. This is where people come to view technology as inherently good or bad. Of course, the notion that a tool could be evil is ridiculous; however, I can understand why it would be easier to shift the blame to an inanimate object or an abstract concept. Why accept responsibility when you can just blame the computer? Or the phone? Or how about we just paint with a broad brush and blame technology in general?

In my research, I found two articles that support the idea that technology has helped us. The first article is How Technology Has Bettered Our Society, by MoWeble. The article discusses a wide range of technology advantages, including communication, agriculture, medicine, shopping, payment, education, gadgets for personal use, and business (MoWeble, 2013). One of the more important points in the article addresses communication. If you think about it, the world has never been more connected; from social networking sites, email, and instant messaging to telephones, cellular phones, and voice over IP. The more connected the world becomes, the faster knowledge can spread. The article also goes into medicine, that is, medicine that has been affected by technology. Many diseases that were fatal not so long ago have now been completely cured, or are at least treatable, all thanks to technology (MoWeble, 2013).

Another article I found that supports how technology has helped us is 8 Ways Technology Is Improving Education, by Sarah Kessler. Kessler (2010) evaluates technology in a positive light and discusses the effects of technology in eight areas, each of which is being used to improve education. The eight areas are better simulations and models; global learning; virtual manipulatives; probes and sensors; more efficient assessment; storytelling and multimedia; e-books; and finally, epistemic games. Several of these areas use technology to cross geographic boundaries and bring people together. For example, global learning employs video conferencing to set up a language exchange (Kessler, 2010). Students can use this technology to learn about another country’s culture, language, and religion.

Of course, there is an alternative point of view; some people believe that technology hurts society more than it helps. I do not agree with this, but I nonetheless found two articles that support this claim. In the article Are Technological Advancements Hurting Society?, by Stephanie Wooster, the author states two ways in which technology is hurting society: (1) because people no longer get their news from traditional sources, such as newspapers and television, they read headlines on the Internet, which may or may not be the whole story; and (2) because the Internet is a vast source of information, the author claims it is too much data for humans to handle, so our brains go into a sort of shutdown mode (Wooster, 2012). After reviewing both of the author’s suggested reasons why technology is hurting society, I would say I disagree with both. Addressing the first issue, people who scan headlines on the Internet: there is no guarantee they would not scan headlines on television or in a newspaper; if a person is going to be lazy, the medium is irrelevant. Secondly, yes, the Internet does contain a lot of information; however, no one is reading the whole Internet. Most people, like myself, have favorite websites they visit often, and unless required, most of the time we do not venture outside those norms.

Another article I found that supports the idea that technology hurts us is Four Ways Technology Hurts, Not Helps Us, by EE Staff. The article discusses four reasons why technology is bad: loss of retail jobs; it’s not personal, it’s the Internet; 140-character thinking; and interpersonal breakdown (EE Staff, 2012). Each area of the article elaborates on how technology is the cause of some human shortcoming; for example, interpersonal breakdown blames mobile phones for disturbing people’s dinners at restaurants. Of course, the notion that the technology itself is at fault is laughable. Another issue the author has with technology is that it has encouraged the loss of jobs; the cited retail stores were Blockbuster, Borders, and Barnes and Noble (EE Staff, 2012). I would like to point out that the only reason those businesses were having issues is that “online” jobs were created; the jobs shifted, they did not disappear into the ether.

You can obviously tell where I stand on technology.  :-)


References

EE Staff. (2012, January 6). Four ways technology hurts, not helps us. Retrieved from http://blog.experts-exchange.com/ee-tech-news/four-ways-technology-hurts-not-helps-us/

Kessler, Sarah. (2010, November 22). Eight ways technology is improving education. Retrieved from http://mashable.com/2010/11/22/technology-in-education/

MoWeble. (2013, March 26). How technology has bettered our society? Retrieved from http://www.moweble.com/how-technology-has-bettered-our-society.html

Wooster, Stephanie. (2012, January 15). Are technological advancements hurting society? Retrieved from http://tv.cos.ucf.edu/blog/?p=5855

 

 

Business Function in IT

11/08/2014 2:07:35 PM by Eddie Jackson

Provide an example of how a business function with which you are familiar (e.g., marketing, finance, operations/productions, accounting, or human resources) can be highly dependent on IT. With regards to the business function that you chose, what are some of the critical or limiting characteristics imposed by the technology? You might consider characteristics such as RAM, speed, connectivity, etc.

Information technology, as defined by Business Dictionary (n.d.), covers a wide range of topics, including methodologies, software development, voice and data communications, and system design, analysis, and administration (Business Dictionary, n.d., para. 1). The specific business function I would like to talk about is automation, as in automating software deployment and computer administration for an organization. The reason I have chosen automation is that it is something I know, and it is increasing in importance inside most companies. First, let us talk about the old days and how things used to be done. It was not too long ago that when a new office suite came out, or the latest application update, the IT department would scramble to get the software installed on every machine at the company; usually this entailed a CD or a network share from which the software would be installed manually. Many software applications had complex setup configurations, so detailed documentation had to be created (and maintained) for the computer technician to follow. To say the least, this was cumbersome and prone to human error; for instance, if the computer tech missed a step, or perhaps did not have access to install the software, the application could not be installed. As companies grew, though, this “manual” process quickly became obsolete and was eventually replaced with back-end desktop management software.

Enterprise software such as System Center Configuration Manager (SCCM, once called SMS) and LANDesk became critical to maintaining computer system updates and installing software (Iowa State University, n.d.). Of course, as great as these desktop management suites were, they were not without problems. For one, most of these management suites did not inherently do all that much without significant configuration, and many times you would have to purchase extra modules. However, as these applications matured, more and more functionality was built into the core components. Features such as inventory control, software compliance, OS provisioning, and software distribution became standard (LANDesk, n.d.). This is where I step in. Software distribution and OS provisioning are great; however, they still require heavy customization and modification to meet the specifications and standards of the company. You would think companies like Apple, Google, and Microsoft would make application distribution easy; they do not. In most cases, plain vanilla installations are out of the question, and company branding, custom settings, and automated (zero-touch) installations have to be designed and implemented. This is what I do: I create automated installations that meet all the requirements of the company. Sometimes this means removing pop-ups, hiding serial numbers, and adding company branding; other times it means creating a GUI for users to interact with, building in tracking, and adding customized settings (pre- and post-installation). Enterprise tools such as SCCM and LANDesk are essential to business operations, but when paired with software automation, their usefulness increases tenfold.
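As a rough illustration of what one of these zero-touch installations can look like underneath, here is a hedged Python sketch of a silent MSI install wrapper; the share path, product name, log location, and COMPANYNAME property are hypothetical placeholders, and a real package would be built to the company’s own standards before SCCM or LANDesk distributes it:

# Minimal zero-touch install wrapper (hypothetical paths and properties).
import subprocess
import sys

MSI_PATH = r"\\fileserver\packages\AcmeApp\AcmeApp.msi"
LOG_PATH = r"C:\Windows\Temp\AcmeApp_install.log"

def install_silently():
    """Run msiexec quietly, suppress reboots, and keep a verbose log."""
    cmd = [
        "msiexec.exe", "/i", MSI_PATH,
        "/qn",                      # no user interface (zero-touch)
        "/norestart",               # do not reboot automatically
        "/l*v", LOG_PATH,           # verbose log for tracking/troubleshooting
        "COMPANYNAME=Acme Corp",    # hypothetical public property for branding
    ]
    return subprocess.run(cmd).returncode

if __name__ == "__main__":
    code = install_silently()
    # 0 = success; 3010 = success, reboot required (standard msiexec codes)
    sys.exit(0 if code in (0, 3010) else code)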

What is really interesting about the field of automation is that it covers a wide range of skills and software applications. I know I was talking about LANDesk and SCCM, but automation is where the magic is (and really, desktop management falls under automation). There are plenty of other tools that can be used to automate business desktop operations, such as Microsoft’s Active Directory, PowerShell, Visual Studio, ExeScript, New Boundary’s Prism Deploy, Spiceworks, Altiris, VBScript, Windows Server Update Services, Symantec Endpoint Protection, and numerous others. The important thing to remember is that multiple tools (and skill sets) will usually be required to maintain a network properly.


References

Business Dictionary. (n.d.). Information technology. Retrieved from http://www.businessdictionary.com/definition/information-technology-IT.html

Iowa State University. (n.d.). System Center Configuration Manager (SCCM). Retrieved from https://www-it.sws.iastate.edu/services/sccm

LANDesk. (n.d.). IT systems management you can count on. Retrieved from https://www.landesk.com/products/management-suite/

 

 

Data Mining

11/14/2014 3:00:49 PM by Eddie Jackson

What is Data Mining? The first step to address this question is to perform adequate research via the Web and/or library. Using what you have found on the Web, please address the following questions in detail. 

Data mining describes the process of assessing and analyzing data in ways that can support product cost changes and service adjustments that positively impact the company’s bottom line (UCLA, n.d.). When data mining is used correctly, it can give a company a market advantage; it does this by allowing a business to recognize problems in processes or products and to draw direct relationships between customer satisfaction and the services or products the company is offering. The concept of data mining is not new; companies have been using it for decades to analyze customer data. What has changed is computing power and the maturity of the field of analytics. Data mining is not a single idea; it actually draws on a wide range of concepts, such as AI (specifically, machine learning), the mathematical principles of statistics, and database research (Pregibon, 1997).
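As a small, hedged example of what “drawing a relationship” can look like in code, the pandas sketch below uses made-up survey data and hypothetical column names to compare per-product satisfaction scores against return rates:

# Illustrative only: synthetic data and hypothetical column names.
import pandas as pd

data = pd.DataFrame({
    "product":      ["A", "A", "B", "B", "C", "C"],
    "satisfaction": [4.5, 4.2, 2.1, 2.4, 3.8, 3.6],      # survey score, 1-5
    "return_rate":  [0.02, 0.03, 0.18, 0.21, 0.05, 0.06],
})

# Average satisfaction and return rate per product.
summary = data.groupby("product").mean(numeric_only=True)
print(summary)

# A strongly negative correlation suggests unhappy customers return items,
# which points at the products (or processes) that need attention.
print("Correlation:", summary["satisfaction"].corr(summary["return_rate"]))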

If we consider examples of poorly implemented data mining and bad practices, there are at least eight common mistakes; a few are covered here (Hinman, 2014). One, a company decides to implement a data model where the data has not been prepared, cleaned, or transformed. Two, the company does not have much time, so no documentation is created describing the model or the data being used. Three, rather than hiring data mining professionals to analyze the data, the company assumes the algorithms used in the software will give them all the answers (and that the answers will be correct). Four, because the company is finding patterns in random data, it assumes the patterns are meaningful and can benefit the company. Five, rather than removing or deleting non-essential data, the company decides to store all data and perform pattern matching based upon ever more complex algorithms.

Of course, there are good ways to implement data mining, along with common best practices (Hinman, 2014). One, plan on all obtained data arriving in raw form and therefore needing ETL (extract, transform, load) time. Two, always clearly define the variables and what information needs to be extrapolated from the data. Three, allow those involved in the data mining process to ask questions, and then follow up on those questions. Four, always try to simplify solutions rather than making them more complex. Five, cross-check data mining results with a second or third technique. Six, document the model and the variables, and clearly define exactly what is required and expected from the data mining.
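Here is a brief, hedged sketch of what that first practice, budgeting for ETL and cleanup, can look like using pandas; the file name and columns are hypothetical placeholders:

# Minimal ETL sketch: extract raw data, clean/transform it, load a snapshot.
# "sales_raw.csv" and its columns are hypothetical placeholders.
import pandas as pd

raw = pd.read_csv("sales_raw.csv")                          # extract

clean = (
    raw.drop_duplicates()                                   # transform/clean
       .dropna(subset=["customer_id", "order_total"])       # required fields
       .assign(order_date=lambda df: pd.to_datetime(df["order_date"],
                                                    errors="coerce"))
)
clean = clean[clean["order_total"] > 0]                     # drop impossible values

clean.to_csv("sales_clean.csv", index=False)                # load for mining
print(f"Kept {len(clean)} of {len(raw)} rows after cleaning")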

Now, let us consider the strengths and weaknesses of these two approaches. In the poorly implemented data mining strategy, you will notice no thought was given to the design of the model. Data mining is based upon models, so the model should be properly designed. This includes preparing the data for mining, cleaning and transforming it, and identifying unknown variables and outliers. For data to be usable, it must be properly prepared; otherwise you end up with dirty data, which most likely means the results will be negatively impacted. Another issue with the poor practices was documentation, or the lack thereof. Because the company did not have time, it decided to forgo creating documentation; big mistake. Documentation is required for the continued success of any data mining project. Knowing the details of the model, the variables used in the algorithm, and the information related to the data mining project will only aid in the overall success of the endeavor.

What did I specifically look for in each of these items? Success. I looked at whether the data mining practice was considered a good practice, and whether that practice leaned toward positive or negative results.


References

Hinman, Heather. (2014, March 1). The do’s and don’ts of data mining. Retrieved from http://www.kdnuggets.com/2014/03/data-mining-do-and-dont.html

Pregibon, D. (1997). Data Mining. Statistical Computing and Graphics, 7, 8.

UCLA. (n.d.). Data mining: what is data mining? Retrieved from http://www.anderson.ucla.edu/faculty/jason.frand/teacher/technologies/palace/datamining.htm

 

 

Custom Software

11/29/2014 2:29:37 PM by Eddie Jackson

There are many benefits to purchasing software as opposed to having it custom built; however, there are many drawbacks as well. From a project manager’s perspective, what are some of these benefits and drawbacks? Remember to consider the project manager’s role in system implementation and explain your comparisons as they relate to a project manager’s responsibilities.

When choosing software, companies can either acquire software from a third-party vendor (external source) or create the software themselves (in-house development). There are advantages and disadvantages to both, and each poses problems when it comes to project management. For example, several advantages of developing software in-house are: there are [perceived] savings; there is 100% control over the features; the business can build the software around corporate strategy (and not the other way around); the software can be developed to fit current systems; and finally, interfaces can be created for ease of use (Clydebuilt, 2012). The disadvantages of developing in-house are: creating software from scratch sometimes means building things that already exist (which can require a lot of [wasted] time); problems can arise in development that cause the development time to drag on; the software could be error-prone and may not be very scalable; the cost of the software could easily escalate; and finally, the software team may not have the skills to deliver exactly what the company is looking for (Clydebuilt, 2012).

Both of these approaches present interesting issues for project managers. For instance, when developing in-house, project managers need to clearly define the variables in the project. If project variables are not properly defined, operational and technical problems can arise later in the project, or worse, during go-live. Because newly created software tends to be buggy (meaning issues will pop up and need to be fixed throughout the software life cycle), the project manager will have to account for debugging time in the project timeline (Business Bee, n.d.). Likewise, because the software is being created in-house, the project itself could take much longer to complete; the project manager will need to manage timelines tightly and account for project constraints that may affect deliverables and scope.

When it comes to an off-the-shelf solution, project managers still have to deal with multiple project concerns. For example, even though the third-party software may be tried and tested, there could be many features or modules that are unnecessary for the project at hand (Business Bee, n.d.), and these “extra” features could prove costly. Additionally, the software could have integration issues and require expensive support and maintenance. The project manager will be responsible for keeping costs within budget, understanding how the software will integrate into current company systems, and managing timelines accordingly. In some cases, an off-the-shelf solution is the better choice, mainly because it reduces implementation time, offers an immediate solution, and usually provides real-world, working examples of how the software performs. There is also the added benefit of sales and technical support to help with issues as they arise, which can be highly important to a project’s timeline. In contrast, in-house development gives a project the element of customization, but it has to be properly managed by the project manager or project scope and costs could be negatively impacted. I believe the best strategy is to hold a meeting discussing time, costs, and the TCO of both software approaches; however, being a developer myself, I like engineering solutions for the company and lean toward in-house development.


References

Brown, C. V., Dehayes, D. W., Hoffer, J. A., Wainright, M. E., & Perkins, W. C. (2012). Managing information technology (7th edition). Upper Saddle River: Prentice Hall.

Business Bee. (n.d.). The pros and cons of developing your own software versus outsourcing. Retrieved from http://www.businessbee.com/resources/technology/software/the-pros-and-cons-of-developing-your-own-software-versus-outsourcing/

Clydebuilt. (2012, May). Developing in-house vs. off the shelf. Retrieved from http://www.clydebuiltsolutions.com/wp-content/uploads/2012/05/Inhouse-VS-Off-the-Shelf-May.pdf

 

 

Global Challenges

12/06/2014 3:03:53 PM by Eddie Jackson

When companies want to go global, what challenges may they face? Conduct some research on other global companies. How do they handle these issues? What are some of the most successful initiatives that you found in your research?

Going “global” can present many challenges, encompassing distance, culture, communications, the priority levels of workflow processes, and intellectual property rights (Cross & Bonin, 2010). In my research, I came across an article published in Oracle Magazine that discusses some of the trepidation associated with taking products global. The article opens with part of a speech given by Jerry. Jerry was giving a pep talk when he delivered the news that the company had just closed a deal with Japan, meaning Oracle would now be moving into the Japanese database market (in a big way). Jerry seemed enthusiastic; however, other employees (especially the developers) did not share his enthusiasm. Why? One employee, Scott, pointed out that the Oracle software was not built for international markets, and thus the move would be an utter failure unless they figured out how to deal with a mountain of issues. What were some of those issues? Scott and his team had to work on currency, date and time, language-dependent strings in the database, and how messages and error handling appeared on the screen (basically, language) (Hardman, 2005). Oracle dealt with the issues by learning about Japanese culture, currency, time zones, and language, which was no easy feat. Even when the database issues had been resolved, moving into other markets still presented challenges of its own; for example, understanding the business laws of Japan, to make sure they did not violate laws or offend the country.
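As a tiny illustration of the sort of localization work Scott’s team faced, the sketch below formats the same timestamp and prices for U.S. and Japanese audiences; it assumes the third-party Babel library and is only meant to show the idea, not Oracle’s actual implementation:

# Locale-aware formatting sketch (assumes: pip install Babel).
from datetime import datetime, timezone
from babel.dates import format_datetime
from babel.numbers import format_currency

now = datetime.now(timezone.utc)

print(format_currency(199.99, "USD", locale="en_US"))   # $199.99
print(format_currency(25000, "JPY", locale="ja_JP"))    # yen uses no decimal places
print(format_datetime(now, locale="en_US"))             # U.S. date/time conventions
print(format_datetime(now, locale="ja_JP"))             # same instant, Japanese format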

Another company (a brief example) that deals with global diversity is Microsoft. Microsoft frames country, region, and language boundaries in terms of a “one world” initiative. Because Microsoft literally spans the entire globe (in both personnel and products), the company has developed a strong global awareness. To properly deal with global variables, Microsoft makes sure it employs people from around the world (a great first step). Next, Microsoft uses technology to bridge gaps in culture, language, and geography. For example, Microsoft products have built-in language packs, conversion software, and localization features (Microsoft, 2014). This ensures that no matter where an employee is, a common platform can be shared in their native language. I personally have experience with Microsoft SharePoint. SharePoint is a shared platform that allows pretty much anyone, from anywhere, to share information and Word, Excel, and PowerPoint documents. Developing global objectives can help a company better understand international markets. Microsoft identifies its core objectives as cultural diversity, language experience, and world readiness (Microsoft, 2014). It uses these objectives to guide the development of products and to create an organizational philosophy that all employees follow.


Personal

I know that I recently had to deal with a time zone issue in a coding project. My task was to create a piece of software that would allow an “elevated” security context, meaning a restricted user who had the proper access code could launch a program setup, like a printer driver installation, without being prompted for credentials. So, I created the software and it worked great, for domestic users. The mathematical formula I used to create unique access codes was dependent on same-day, same-week, and same-month variables (among other permutations). Once the program reached other countries across the Atlantic Ocean, the time zone shift was +8.5 hours. This caused many of the access codes to be generated for the next calendar day, which of course did not line up with the codes domestic users were generating. I had to figure out a formula that would allow the access codes to remain unique but work for all time zones. It was a challenging project, but as you can see, time zones can affect more than just language or communication. I worked directly with domestic and international contacts to test a workaround, which eventually ended in a working solution.
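Below is a hedged, simplified sketch of the kind of fix involved; the secret and the hashing scheme are placeholders rather than my actual production formula, but they show the key idea of deriving the daily code from the UTC calendar date so that every time zone agrees:

# Simplified illustration of the time zone fix (placeholder secret/scheme).
import hashlib
from datetime import datetime, timezone

SHARED_SECRET = "hypothetical-secret"   # known to both generator and client tool

def daily_access_code(when=None):
    """Derive the code from the UTC calendar date, not the local date."""
    when = when or datetime.now(timezone.utc)
    utc_day = when.astimezone(timezone.utc).strftime("%Y-%m-%d")
    digest = hashlib.sha256(f"{SHARED_SECRET}:{utc_day}".encode()).hexdigest()
    return digest[:8].upper()           # short, human-typable code

# A technician in New York and one in Tokyo now compute the identical code,
# because both reduce "now" to the same UTC date before hashing.
print(daily_access_code())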

So, what does this all mean? It means that if a company is going to be successful in foreign or international markets, the equation for success is complex; it includes understanding culture, currency, time zones, language, and business laws.


References

Brown, C. V., Dehayes, D. W., Hoffer, J. A., Wainright, M. E., & Perkins, W. C. (2012). Managing information technology (7th edition). Upper Saddle River: Prentice Hall.

Cross, Barry, & Bonin, Jason. (2010, December). How to manage risk in a global supply chain. Retrieved from http://iveybusinessjournal.com/topics/strategy/how-to-manage-risk-in-a-global-supply-chain#.VINUJfnF-6U

Hardman, Ron. (2005, September). Retrieved from http://www.oracle.com/technetwork/articles/grid/o55global-098881.html

 

 

Googlization

12/13/2014 3:23:07 PM by Eddie Jackson

In 2003 John Battelle and Alex Salkever introduced the term “Googlization” to describe the phenomenon of ubiquitous information sharing via the web. This pervasive sharing of information has led to the rise of numerous ethical and security issues. Research the concept of information sharing and Googlization.

Find at least one article in the library that addresses an issue related to ethics or security and information sharing. Briefly summarize the article and explain why you agree or disagree with the author. Then read and respond to at least two of your classmates’ posts. Do you agree with your classmates’ comments? Why or why not? Remember to cite your research.

The article I read, The Googlization of Books, discusses the power of Google, whether Google is evil or not, and why the Googlization of books has failed, thus far that is. Albanese (2011) speaks on Google’s expanding power; basically, Google is taking over the planet’s information. I’m sure it would be hard to imagine a world without the Internet, and it is equally difficult to imagine one without Google. Google has moved into music, movies, personal videos, search engines, scholarly articles, and thousands of platform-based applications. It does seem like Google is taking over the world. However, for some reason, digitizing books has presented one of its greatest challenges. Google faces enormous obstacles when it comes to converting all the books of the world into e-books. For one, the task itself is monumental due to the sheer number of books. Secondly, Google doesn’t own the rights to perform such a conversion. Thus, to assist with this task, Google went to universities. Universities seemed like a great place to start, as many have large, always-changing libraries. Of course, as the author points out, the universities could not and did not receive the proper funding (Albanese, 2011). Largely, this led to the failure of Googlizing books. There were also political, copyright, and overall-vision problems with the e-book/digital undertaking; meaning, not everyone was as enthusiastic as Google was, and thus the project failed.

After reviewing the article, it makes sense that the endeavor would fail. I mean, we are talking about taking someone’s book, magazine, or article that they would normally be paid for and giving it away for free (or nearly free, via advertising). I can understand how some people would have a problem with that. Now, does this mean Google is evil? I don’t think so (just capitalist). Many people believe that information should be free; though, with the commercialization of pretty much most of the Internet, the future of the Internet is bound to be profit driven, not free. I did do more research on the subject of e-books versus paper books. Most of the research I found pointed to a surge in e-books and a decrease in regular books, although I did find several articles stating that paper book sales were up, but only because of e-books. For instance, one article from McKinney (2014) stated that e-book sales went from essentially zero to three billion dollars between 2008 and 2013, and that these digital sales are helping to keep traditional book sales afloat (McKinney, 2014). I will say this about e-books: I personally don’t think they are as good as, or even an equal replacement for, a paper book.

Let me tell you why. One, there is a certain pride I take in owning my paper books that I’ll never have with a digital copy. Two, I have always looked at books as a way to connect with the author; digital books do not possess that quality. There is also the point that studying (especially at the college level) is much easier with a paper book. I know I can’t copy and paste, and I don’t have a find button, but to me the paper book can go anywhere, doesn’t require charge time or batteries, and makes it much easier to jump between summaries, reviews, and terminology sections. For those who take notes in their books (I used to, in pencil), e-books and e-readers are a decade away from what books have been doing for a hundred years. There is also another college issue: sometimes I’ll have four or five books open on my desk. Try that with a PDF; I guess you would need five e-readers or laptops. I’ll get down from my soapbox now, but it really does make me wonder what will be lost if all books become digital. I haven’t seen one case study that proves people learn equally well from digital text; I know that I do not. Perhaps future generations will learn differently, or maybe they’ll learn less effectively. What I hope to see is better technology that simulates books in every way, and then has the digital features added on top of the traditional characteristics.


References

Albanese, A. R. (2011). The Googlization of books. Publishers Weekly, 258(5), 20.

McKinney, Kelsey. (2014, June 27). Book revenues are up — but without ebooks, they’d be plummeting. Retrieved from http://www.vox.com/2014/6/27/5849354/e-books-will-save-the-publishing-industry

 

+100 more…

 

tags:  Graduate Discussions, University, MrNetTek