Thursday, November 17, 2011

Assignment 5 Link

My user name is amh185 and my list/virtual shelf is called Assignment 5 Alexandra Hilton. I chose documents about film festivals to build my shelf.

http://jade.exp.sis.pitt.edu:8080/cgi-bin/koha/virtualshelves/shelves.pl?viewshelf=62

Tuesday, November 8, 2011

Reading Notes for 11/11

Digital Libraries: Challenges & Influential Work
The Internet makes our job as librarians and archivists difficult, since we are responsible for searching through information hidden in cyberspace. Providing digital library services means there needs to be a way for a user to sift through information in the digital environment, and to do that some kind of order has to be imposed. The National Science Foundation was one of the first federal programs to support digital library research when it funded six projects, collectively called the Digital Libraries Initiative (DLI-1). DLI-2 came shortly after and involved many more federal organizations, such as the Library of Congress and FBI. The program kept evolving from there, with more organizations and universities joining and bringing more money into digital library research. The University of Illinois, for example, focused on the deployment and evaluation of journals in digital form, and gave publishers the opportunity to put their journals online. The Illinois Testbed was used as publishers began utilizing HTML/CSS, internal linking with citations and footnotes, and forward/backward links to related articles, among other features. At the beginning of DLI-1 the prominent web browser was Mosaic 2.0 beta; Netscape Navigator wasn't even available yet, and Microsoft Windows 3.1 was the most common OS. The DLI program set in motion the development of guidelines and standards for digital libraries, which have continued to evolve since.
Search was an important issue. Metasearch can be handled by harvesting content into one central index, the way a search engine like Google does, or by a broadcast approach that sends a query out to many separate search systems. Metadata searching versus full-text searching is a point of tension between the two approaches. The two could work together if broadcast searching developed standards and made the search function easier for library users to understand.

Dewey Meets Turing: Librarians, Computer Scientists, and the Digital Libraries Initiative
The National Science Foundation began DLI in 1994. At first digital libraries targeted librarians, computer scientists, and publishers, but the audience eventually grew beyond those three groups, especially when the Google search engine came about. Computer scientists are usually library fans, so a library's function is easy for them to understand. Digital library projects gave them research that combined technical challenges with helping society, and required developing a totally new kind of system. Librarians were open to the collaboration because the sciences are great monetary supporters of libraries, and they knew IT development was necessary to remain relevant to scholarly work. DLI seemed to be a union between computer science and librarianship, but the rapid growth of the web changed things: it blurred the line between consumer and producer, and with it the common ground the two fields had met upon. Computer science did not really have to shift its work, but librarians were forced to take account of what would happen to their traditional roles. Computer science grew naturally with the internet, drawing in newcomers attracted by the appeal of the web. The library community, by contrast, was disrupted, especially when publishers demanded high prices for digital journal content; many academic libraries could not afford them and had to cancel subscriptions.
Downsides to the partnership included the small share of DLI money that libraries received, and librarians' sense that computer scientists didn't realize the importance of their jobs and collections. Computer scientists, for their part, couldn't understand why librarians wanted metadata. Yet information must still be organized and presented, which is a librarian's duty. Now there are calls for a partnership between librarians and scholarly authors.

When trying to access the third article I got a "404 error" page…which is comical because the end of the second article was a little anecdote about how much that annoys librarians. I'm guessing it wasn't meant as a joke, so I went ahead and googled the article instead.

Institutional Repositories: Essential Infrastructure for Scholarship in the Digital Age
An institutional repository gives the university community access to digital materials created by the institution and its members, and preserves the university's history. Librarians, IT professionals, archivists, records managers, faculty, and university administrators must all work together to create this online repository. As technology changes, access must continue and change with it. An institutional repository should contain both faculty and student work, plus records of activities and events at the university. It should document the development of the university through time and be available for others to see.
The scholarly community and scholarship itself are changing. Early internet users realized its potential for sharing ideas, whether in a scholarly journal or not, and some faculty members have looked to the internet to disseminate their work and reach a larger audience. But when they publish outside the formal scholarly world, they become responsible for looking after the content and making sure access remains, and the metadata needs to be watched over as well. This is a difficult task, since faculty are used to creating their records, not maintaining them, and these materials are easily lost. Another issue is preserving the scholarly record itself, which many faculty members are not familiar with doing, especially as scholarship increasingly takes the form of datasets and analysis tools.
Institutional repositories have other duties, such as developing a new collection strategy and gathering materials that might be useful to research libraries. They can facilitate access to traditional scholarly work over the internet and implement an easier system for submitting materials.
Some dangers around institutional repositories include deciding what counts as intellectual work, not overloading the systems, and making sure the host institutions commit to their importance. Over time they can easily fail if money runs out, management declines, or technical problems arise. Their mission is to preserve institutional materials, provide reference services for those materials, and manage the rights to the digital content.

Sunday, November 6, 2011

Muddiest Point from 11/3

My question is pretty basic and something I've just found myself wondering about, but what is a markup language exactly? Why is it called that?

Wednesday, November 2, 2011

CiteULike Assignment 4

Alexandra Hilton's CiteULike Library
http://www.citeulike.org/user/ahilton88




Sunday, October 30, 2011

Notes Week 10


What Is XML?
·         A subset of the Standard Generalized Markup Language (SGML), designed to make it easy to interchange structured docs over the internet
·         Defines how Internet Uniform Resource Locators (URLs) can be used to identify component parts of XML data streams
·         Document Type Definition (DTD): a formal model of the role of each element of text; not required in XML
·         XML lets users bring multiple files together to form compound docs, specify where to put pictures in text, give processing control info to supporting programs, and add editorial comments
·         Composed of a series of entities; each entity contains one or more logical elements, and each element can have attributes describing how it should be processed
·         Tag sets are defined using a DTD
·         Some elements are placeholders: empty elements with no end-tag, usually used for graphics
·         Unique identifiers are important; they allow cross-references between two points in a doc
·         A text entity is commonly used text defined within the DTD
·         An XML file normally has three types of markup, the first two optional: the processing instruction, the document type declaration, and the document instance
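The notes above can be made concrete with a minimal sketch using Python's standard xml.etree module (the memo vocabulary here is invented for illustration): a well-formed doc with a processing-instruction prolog, an element carrying an attribute, and an empty placeholder element.

```python
import xml.etree.ElementTree as ET

# A tiny XML document: the prolog (a processing instruction) followed by
# the document instance itself. The element names are made up.
doc = """<?xml version="1.0"?>
<memo id="m1">
  <to>Staff</to>
  <body>Meeting at noon.<br/></body>
</memo>"""

root = ET.fromstring(doc)     # parses only if the document is well-formed
print(root.tag)               # memo
print(root.get("id"))         # m1  -- an attribute describing the element
print(root.find("to").text)   # Staff

# <br/> is an empty placeholder element: no end-tag, no text, no children.
br = root.find("body/br")
print(br.text is None and len(br) == 0)  # True
```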

Survey of XML standards
·         Builds on Unicode & DTDs
·         XML 1.1 was the first revision; it revised the treatment of characters in the XML specification to adapt more naturally to changes in the Unicode specification and to character normalization
·         Based on the Standard Generalized Markup Language
·         XML Catalogs give instructions for how an XML processor resolves XML entity identifiers into actual documents; system identifiers are given by URIs, and public identifiers can also be mapped
·         Namespaces in XML provide universal naming of elements and attributes in XML docs; assign vocabulary markers if you want to embed XHTML
·         XML Base associates XML elements with URIs to specify how relative URIs are resolved in relevant XML processing actions
·         Canonical XML Version 1.0 is a standard method for generating a physical representation of an XML document, called its canonical form; it accounts for variations allowed in XML syntax without changing meaning
·         XML Path Language (XPath) is a syntax/data model for addressing parts of an XML document, a "little language"
·         The XPointer Framework defines a language for referring to fragments of an XML doc
·         XLink is a generic framework for expressing links in XML docs; more powerful, but harder, than linking in HTML
·         Relax NG is an XML schema language for defining and limiting XML vocabularies; the original schema language is the DTD, but some people dislike it, while Relax NG is simpler and more expressive
·         W3C XML Schema is another schema language for XML; its first part constrains the structure of the doc, the second constrains the contents of simple elements and attributes
·         Schematron is a schema language that registers a collection of rules against which the XML doc is checked, rather than mapping out the entire tree structure
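Two of these standards, namespaces and XPath-style addressing, can be sketched with Python's standard xml.etree module, which supports a subset of XPath 1.0 (the report vocabulary here is invented; the XHTML namespace URI is the real one):

```python
import xml.etree.ElementTree as ET

# An invented doc that embeds XHTML-flavoured markup under its own
# namespace -- the situation Namespaces in XML is designed for.
doc = """<report xmlns:h="http://www.w3.org/1999/xhtml">
  <title>Quarterly Notes</title>
  <h:p>Rendered as an XHTML paragraph.</h:p>
</report>"""

root = ET.fromstring(doc)

# ElementTree expands the h: prefix to {uri}localname ("Clark notation"),
# so the element's universal name survives whatever prefix was used.
para = root.find("{http://www.w3.org/1999/xhtml}p")
print(para.text)

# A small XPath-style location path addressing a part of the document.
title = root.find("./title")
print(title.text)
```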

Extending Your Markup
·         XML docs look like HTML docs: they start with a prolog and end with exactly one element
·         That single element can be viewed as the root of the doc; everything builds off from there
·         A DTD is declared in the XML doc's prolog with the !DOCTYPE tag
·         Elements are nonterminal or terminal. Nonterminal elements contain subelements, grouped as sequences or choices; terminal elements are declared as parsed character data or EMPTY; elements can also be declared as ANY
·         Elements can have zero or more attributes, declared using the !ATTLIST tag
·         Character data is the most common data type for attributes; other types include id, idref, and idrefs
·         Namespaces avoid name clashes and can be defined in any element; best practice is to define all namespaces within the root element and use unique prefixes. Namespaces and DTDs don't work well together
·         XLink describes how two docs can be linked together, using its own namespace
·         XPointer enables addressing individual parts of an XML doc
·         XPath is used by XPointer to describe location paths; a location path is made of location steps
·         XSL is really two languages: a transformation language (XSLT) and a formatting language
·         XSLT can transform XML into HTML, bypassing the formatting language
·         XML is a family of languages
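As a sketch of the !DOCTYPE point above, here is a DTD declared in the prolog that defines a text entity, which the parser then expands in the document instance (Python's standard xml.etree module; the note vocabulary and the entity are invented):

```python
import xml.etree.ElementTree as ET

# A DTD declared in the prolog with !DOCTYPE. The internal subset defines
# a text entity -- a reusable piece of boilerplate text.
doc = """<?xml version="1.0"?>
<!DOCTYPE note [
  <!ENTITY sig "Alexandra Hilton">
]>
<note>Signed, &sig;</note>"""

root = ET.fromstring(doc)
print(root.text)   # the parser expands &sig; into the declared text
```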

W3Schools XML
·         The greatest strength of XML Schemas is support for data types; schemas are themselves written in XML, so you don't have to learn a new language, and they help secure data communication
·         A well-formed XML doc is a doc that conforms to the XML syntax rules
·         Schemas define complex types and simple types
·         The <schema> element is the root of every XML Schema
·         A simple element contains only text and can't have attributes
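Well-formedness is mechanical enough to check with any XML parser. A tiny sketch with Python's standard xml.etree module (document contents invented):

```python
import xml.etree.ElementTree as ET

# Well-formed: tags nest properly and there is exactly one root element.
good = ET.fromstring("<shelf><item>Film festivals</item></shelf>")

# Not well-formed: the end-tags are crossed, so the parser rejects the
# entire document rather than guessing what was meant.
rejected = False
try:
    ET.fromstring("<shelf><item>Film festivals</shelf></item>")
except ET.ParseError:
    rejected = True

print(good.find("item").text)  # Film festivals
print(rejected)                # True
```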

Muddiest Point Week 9

Could you please explain again how to link CSS style sheets together so that they apply to a whole web page?

Monday, October 24, 2011

CSS Readings

On the making of a most basic CSS webpage...
1. Use NotePad, TextEdit, or KEdit; no word processors. The first line of the HTML file tells the browser what type of HTML it is. Tags enclosed in < and > tell the browser how to structure the text of the document.
2. To add color, start with a style sheet embedded inside the HTML file using the <style> element. Style sheets in CSS are made up of rules: a. the selector tells the browser which part of the doc is affected by the rule, b. the property specifies what aspect of the layout is being set, and c. the value is the value given to the style property.
3. Then add fonts, making sure to list alternate fonts in case someone is using an old school web browser.
4. Add a navigation bar using 'padding' and 'position'.
5. Style the links: the <a> element is for hyperlinks.
6. Add a horizontal line to set off the significant text at the bottom.
7. Put the style sheet in a separate file so all pages can point to it and share the same style.

Chapter 2 CSS
A rule is a statement about one stylistic aspect of one or more elements. A style sheet is a set of one or more rules that apply to an HTML document.
A rule has: 1. a selector: the link between the HTML document and the style, specifying which elements are affected by the declaration; 2. a declaration: the part of the rule that sets forth what the effect will be, made of two parts: the property, a quality or characteristic that something possesses, and the value, the precise specification of that property.
Gluing combines the style sheet and HTML document by:
1. applying a basic, document-wide style sheet for the document using the style element
2. applying the style sheet to an individual element using the style attribute
3. linking an external style sheet to the document using the link element
4. importing a style sheet using the CSS @import notation
A CSS-enhanced browser is needed to display the styled page.
Property values inherit from the parent element, with exceptions such as background.
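A toy sketch of the gluing methods above (Python standard library only; the page contents and file name are invented): a document-wide <style> element, an external sheet attached via <link>, and a per-element style attribute, detected with a small HTML parser.

```python
from html.parser import HTMLParser

# A toy page showing three gluing methods: an external sheet via <link>,
# a document-wide <style> element, and an inline style attribute.
page = """<html><head>
  <link rel="stylesheet" href="site.css">
  <style>
    body { color: navy; }  /* selector: body; property: color; value: navy */
  </style>
</head><body><p style="font-weight: bold">Hello</p></body></html>"""

class StyleFinder(HTMLParser):
    """Records each styling mechanism as it appears in the page."""
    def __init__(self):
        super().__init__()
        self.found = []
    def handle_starttag(self, tag, attrs):
        if tag in ("style", "link"):
            self.found.append(tag)
        if ("style", "font-weight: bold") in attrs:
            self.found.append("inline-style")

finder = StyleFinder()
finder.feed(page)
print(finder.found)  # ['link', 'style', 'inline-style']
```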

Sunday, October 16, 2011

HTML

As part of my undergrad education I had to take a class on web design and learned the basics of HTML, although we didn't really write HTML directly; we mostly used CSS in Dreamweaver. HTML is interesting. You have to be very exact, because one little misplacement of a / and your web page is not going to look like you want it to. Likewise, it can be frustrating when you cannot get the code exactly as you want it. It is interesting to look at the HTML of a particular web page, because it is quite complex. I liked the wired.com cheat sheet; it seems like a handy bit of info covering all the typical coding. Luckily in my classes we didn't have to worry too much about writing HTML, because I think it would have driven me crazy otherwise. I am not meticulous enough for that.

Notes on the article:
Content management systems collect, manage, and publish content, and they let anyone make a research guide: people who want to contribute content to a web page can do so without having to know HTML. With a CMS, content can be a variety of things, such as resource links, images, or PDFs, and once a submitted object is in the database it can be reused over and over. Some sites use a CMS to moderate what is submitted, but mostly a CMS just makes it easier to add content to websites. Content can be customized, so it lets people be creative as well. In this article FrontPage was the web development software used, mostly because it was free. After the CMS was developed it was tested out and eventually switched over to. The system proved beneficial for managing the library's research guides, and the appearance of the guides stayed consistent. The CMS model has not been adopted by all libraries yet, but those that do adopt it should take their time transitioning.

Muddiest Point Week 7

I do not have any muddiest points at this time.

Monday, October 10, 2011

Notes Week 7

How Internet Infrastructure Works
The internet is the formation of all the networks in the world connecting together. In 1969 there were only four host computer systems; now there are an uncountable number. The Internet Society oversees the policies and protocols of the internet. A Point of Presence (POP) is how local users access a company's network. There is no single controlling network, just high-level networks that interconnect via Network Access Points. A router passes communications from one computer toward another. An IP address identifies your computer. It's made of octets, with a Net section and a Host/Node section: the Net section identifies the network the computer belongs to, and the Host section identifies the actual computer. The Domain Name System maps text names to IP addresses automatically, so that a user doesn't have to remember numeric addresses. Machines that provide services to other machines are servers; the machines that connect to them are clients. Protocols define how the client and server will communicate with each other.
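The octet and Net/Host idea can be sketched with Python's standard ipaddress module (the address and the /24 netmask here are invented for illustration):

```python
import ipaddress

# An invented private address with an assumed /24 netmask: the first
# three octets are the Net section, the last octet the Host/Node section.
iface = ipaddress.ip_interface("192.168.1.42/24")

print(iface.network)   # 192.168.1.0/24 -- the Net section
print(iface.ip)        # 192.168.1.42   -- the full address

# The Host/Node section is what's left after removing the network part.
host_part = int(iface.ip) - int(iface.network.network_address)
print(host_part)       # 42

octets = str(iface.ip).split(".")
print(octets)          # ['192', '168', '1', '42'] -- the four octets
```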

Google
Google has over 100 projects. Many of these we have used already, such as Desktop, which is like a better version of the toolbar for your whole computer. The Google Foundation and Google Grants both work to "make the world a better place" by supporting numerous charities. Google Answers is a service that lets a user pose any question to researchers and have them research it. AdSense is a program that makes featured ads relevant to the individual user's interests. Orkut is a social network Google invented that is slowly gaining users. Google believes it should make money from ads, not by having people pay for search results.

Dismantling Library Systems-Pace
Integrated Library Systems (ILS) have to work hard to remain relevant and keep up with evolving technologies. They either must start over or retool to add new software to their system. It is highly discouraged to start over because it can make for a lot more work than is necessary and ruin what they had before. Pace says that ILS can either "continue to maintain large systems that use proprietary methods of interoperability and promise tight integration of services for their customers or choose to dismantle their modules in such a way that librarians can reintegrate their systems through web services and standards."

Friday, October 7, 2011

Muddiest Point Week 6

Can you please provide examples for the different network set ups (bus, star, ring) and what kind of institution would employ them? Which one is the most popular and/or efficient? Also, with peer-to-peer, why is it that sometimes on the wireless network connections on my PC there will be other people's computers on the list, but you cannot connect to them?

Monday, October 3, 2011

Notes on Computer and Wireless Networks

On the Local Area Network (LAN) wiki--
A LAN is a computer network that interconnects computers in a limited area, such as a home or a school. Because they cover a smaller area, LANs have faster data transfer rates and do not require leased telecommunication lines. Ethernet and Wi-Fi are the two typical options for building LANs. Ethernet was developed at Xerox PARC in 1973-75. Coaxial cable was traditionally used for LAN cabling, but now that Wi-Fi has become popular it is no longer necessary. LANs will forever remind me of my dorky high school friends who used to have 'LAN parties' to play video games against each other.

Computer Networks wiki--
Computer networks are collections of hardware components and computers interconnected by communications channels so that resources and information can be shared. Communications protocols, such as Ethernet and IP, are used for sharing information. Computer networks are what make it possible for us to email, share documents, or connect to a shared server. However, they can also help viruses spread, interfere with other technologies, and be difficult to set up.

Management in RFID in Libraries, Coyle--
RFID stands for radio frequency identification. RFID tags are like bar codes, but read with an electromagnetic field, and they can support more complex tasks than a bar code. Some familiar uses are in cars for automatic tolls, in card keys, and for tracking animals. In a library they can serve as a security measure, keeping track of when an item is checked in or out. A patron can check out a stack of books at once instead of scanning each bar code individually. Because all the information is on the chip, RFID tags can save processing time and money, and they make inventory easier, more cost-effective, and feasible to do more regularly.

Monday, September 26, 2011

I guess I have a lot of questions about databases because this concept was a little more difficult for me to grasp. Could you clarify again the difference between a primary key and a foreign key? Also, if possible, could you show some online examples of the popular/most used databases, so that I can picture them when thinking about the differences between database types? Thanks!

Tuesday, September 20, 2011

Assignment 1 Link

The link to my Flickr account is:
http://www.flickr.com/photos/67717223@N03/sets/72157627585225167/
Enjoy!

Databases

The information on databases that we had to read for this week was... a lot. While I would consider myself slightly ahead of the average individual on the IT learning curve, my brain still gets tangled when delving into the ins and outs of different databases and how the actual data gets stored. Luckily, I still recognize their extreme importance to society, especially in this digital age, and it is surprising to think about the number of times we access a database without acknowledging that it is one.
Which brings me to the Internet Movie Database, most likely one of my favorite websites of all time. IMDb is an external database, meaning that it has "data collected for use amongst multiple organizations" (Wiki article). According to IMDb's Wikipedia entry, it started out as a hobby of Col Needham (IMDb CEO) in early 1989. He, along with other film enthusiasts, kept lists pertaining to movies, such as one on the actresses with the most beautiful eyes, and the group posted the lists to the Usenet newsgroup rec.arts.movies. However, it wasn't until October 17, 1990, when Needham wrote a series of Unix shell scripts that made the lists searchable, that it became a true database.
Obviously the website's interface has evolved over time. Now a visitor to the site can access virtually any movie, television show, actor, production crew member, video game, or fictional character that has ever been featured in entertainment. Heck, even my sophomore-year screenplay writing professor has his own page for that one Emmy he won in 1989. Any particular page has subsections addressing topics such as a film's plot synopsis or a character's most memorable quotes. My personal favorite has always been the trivia section, where there is always a surprise waiting.
Even though the inner workings of databases seem tedious, their infrastructure becomes much more fascinating when you look at some of your favorite web destinations. Taking a closer look at IMDb has even inspired me to try to reread exactly what goes into such a construction.

Monday, September 19, 2011

Muddiest Point Week 3

My muddiest point isn't so much a request for clarification as a suggestion for how to exhibit the differences between compression types. For instance, when I think of a vector graphic I think of Adobe Illustrator. Using this program makes it much easier to understand how vector graphics retain their shape as an image is enlarged or reduced. It was easier for me to understand the difference between a vector image and a pixelated image by going into the actual program and seeing how the vector graphic remained the same despite its size. I understand the limitations of this strategy, but in my media arts background I had an easier time understanding the differences between image types when I could see them.

Wednesday, September 14, 2011

Week 3 Readings: Compression Makes Entertainment

Our weekly readings for Week 3 support our upcoming lecture on multimedia representation and storage, from how documents are compressed to how those documents benefit the public.
These readings made me think about my film work in high school and college. When a movie was done we would compress the film in its entirety so it was easier to share. The art of compression was always a little tricky, as the compression rate and the desired output quality had to be weighed depending on how the movie was going to be used. Typically we would compress it several ways: one small enough to be shown on the web, one to be shown on a large screen, one that could be emailed, and so on. Usually the movie would be exported into a QuickTime format, a much, much smaller file than the working file. The working file held all the footage, titles, audio, sound effects, and music in raw form, making it impossibly large to burn to a DVD or rely on for universal viewing.
Needless to say, I have a pretty good understanding of why the compression process is important, even if I only have an inkling of how it's actually done. After reading the articles on data compression I can't really say I understand it much better, as the in-depth look pretty much went over my head, but I did take away that data compression is integral to sharing multimedia on the web.
Without compression it would be unfeasible to think that we could put all those photographs (like the Imaging Pittsburgh project) or millions of videos (YouTube) on the internet. Thanks to my undergrad education I have seen compression's usefulness first hand. I first realized how small images and videos have to be on a web site when I took a web design course in college. It was kinda annoying at first that I had to resize all my images and learn HTML and CSS to design a web page that wouldn't take all day to load. Luckily for the everyday person, programs such as Flickr and YouTube, or templated website designs, take the work out of going through each file. You can load a video file onto YouTube and it will adjust the quality so the viewer can watch it in a timely fashion. If you take a photo with your smartphone, it will adjust the size of the image so it will send as an MMS. It's really neat and makes life so much easier.
In summary, surfing the internet would be pretty boring if data compression did not exist.
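As a toy illustration of the idea (lossless compression with Python's standard zlib module; real video codecs are lossy and far more sophisticated), repetitive data shrinks dramatically and decompresses back to the original exactly:

```python
import zlib

# Highly repetitive fake "footage" data compresses extremely well.
raw = b"frame frame frame frame " * 1000
packed = zlib.compress(raw)
restored = zlib.decompress(packed)

print(len(raw), len(packed))  # the compressed copy is far smaller
print(restored == raw)        # lossless: we get the original back exactly
```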

Saturday, September 10, 2011

Muddiest Point Week 2


I suppose I could come up with a multitude of inquiries about our lecture on computer hardware and software, mostly because it is almost too complex to imagine how it all works together so well. We all have some familiarity with these parts, although I don't think many people, apart from those who specialize in the subject, are totally sure of their exact roles.
One question in particular that came up during the lecture was whether the memory cache on the motherboard has the same purpose of the cache that is used for web pages. During college we talked somewhat about the cache function, but I was curious about the differences and similarities between the two.
Another question I had was whether we could get an example of the type of computer that a typical consumer would purchase today, in terms of the CPU, OS, RAM, etc.

Wednesday, September 7, 2011

Week 2 Readings: Existential Crisis Over Computers Ruling The World


The article on Google Books, digitization, and the European libraries addressed a subject that has long been of interest to me. As an undergrad I was a Media Arts & Design major; half the time I felt like I was majoring in the internet. One of my required courses was on media law. Interestingly enough, it was probably one of my favorite courses, albeit the least creative. In that class we spent a lot of time discussing how Google is scanning books upon books in order to make them easily accessible to the public. Admittedly I haven't given much thought to the subject since completing that course over three years ago, so I am sure the project has undergone many changes along the way. Our primary concern when discussing it was how Google was getting around that pesky copyright issue. Is it possible to just scan an unimaginable number of books, put them on the internet, and not pay enormous fees to do so? Copyright privileges don't come cheap, especially for the number of books Google has used.
Additionally, I just don't get how the leading nations of Europe are having trouble financing such a large project while Google seemingly has had none. I agree with the one leader who found it concerning that Google's distribution of materials would shape the interpretation of European literature, politics, and history. I would be a little alarmed too if the heritage of my country were translated into Americanese before I had a chance to do so first. Yes, it is great to have multiple viewpoints involved, but I do feel bad for the Europeans for falling behind.
Preserving history and literature is incredibly, undeniably important, but I can’t shake the weird, creepy feeling I get when I think about a world where a book, made out of good ole’ paper, does not exist. If I think about this too much I might give myself an existential crisis, but I’m not totally down with Google, even if I religiously use their search engine/email/maps/cellular telephone/browser, etc. I just don’t want to wake up one day and realize that Google is turning us all into robots that are shaped like the one on my Android.
The articles on computer hardware and software had me reminiscing back to my glory days as a five-year-old. My father is a sales engineer for a company that sells computer parts to some other company that assembles them, or something like that. In any case, he was always on top of up-and-coming trends, and I'll never forget the joy I felt when I played the bird racing game on our 1988 Macintosh desktop computer. The evolution of computers from that black-and-white, no-internet-connection, boxy-looking contraption to the vibrant, RAM-infused beauty I have on my lap is astonishing. As another part of my media background I had a brief history course on computers, and I have seen the exhibit on them in the Air & Space Museum. Is there ever going to be a time when we just cannot improve on them anymore? Reading the articles now, especially on software, you realize how much you take for granted receiving a laptop installed with the basic necessities. If you need anything extra, you just pop over to Staples or download it from the internet. It has become so convenient and user-friendly to connect with others that I almost get annoyed by the fact that there is still no universal wi-fi system. Oh well, I'll keep dreaming.