Wednesday, June 26, 2013

My Second JavaScript

So I have now completed my second JavaScript exercise, short, sweet, and right below:


<!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN" "http://www.w3.org/TR/html4/loose.dtd">
<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=utf-8">
<title>Second Javascript Lesson</title>
</head>
<body>
<script>
function relister() {
    var text = document.getElementById('text').value;
    var count = document.getElementById('count').value;
    var zelda = document.getElementById('zelda');
    while (count > 0) {
        zelda.innerHTML = zelda.innerHTML + "<p>" + text + "</p>";
        count = count - 1;
    }
}
</script>
<form>
Text: <input type="text" name="text" id="text" /><br>
Count: <input type="text" name="count" id="count" /><br>
</form>
<button type="button" onclick="relister()">Submit</button>
<div id="zelda">
</div>
</body>
</html>


The idea with this exercise was for me to learn how to grab input from the user and then manipulate it with JavaScript. So I needed to let a web surfer enter a string or phrase, and then a number separately, and then when they hit a button the page would list that phrase that number of times at the bottom. I actually learned a little HTML that I didn't know before this, as I had never really created any sort of form or input button using basic HTML code before. Previously I had used graphics-editing software to make custom buttons and the like.

To pull information from a form you define a variable and set it with var text = document.getElementById('text').value, which tells your script to read the current value of that form field. You do have to be careful that the id inside the quotes and parentheses matches the id attribute in the form's HTML exactly (and note the capitalization: it is getElementById, not getElementByID). Then create another variable matched to the id of the div where you want to write your output text. Finally, the JavaScript while loop is what actually writes the new content: it keeps writing until the count hits zero, and every pass through the loop reduces the count by one. By appending the new paragraph to the current contents of the div, rather than overwriting them, we ensure the lines are repeated rather than replaced.
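The same count-down can also be written as a for loop. Here is a sketch of that idea with the DOM work factored out into a helper (repeatAsParagraphs is my own name for it, not part of the exercise):

```javascript
// Build the repeated paragraphs as one HTML string. A for loop
// counting up replaces the count-down while loop from the exercise.
function repeatAsParagraphs(text, count) {
    var html = "";
    for (var i = 0; i < count; i++) {
        html = html + "<p>" + text + "</p>";
    }
    return html;
}

// Wired into the page, the button handler would then be something like:
// document.getElementById('zelda').innerHTML = repeatAsParagraphs(text, count);
```

Keeping the string-building separate from the document.getElementById calls also makes the logic easy to test on its own.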

Monday, June 17, 2013

My First JavaScript

One thing that I have learned while on this quest is that thus far I have been a terrible project planner, likely due to my relative lack of programming experience. I wasted a large portion of the time that I had for the project attempting to find the right tools for the job and the right APIs. Then I spent a lot of time reading about what to do and attempting to dive into the deep end of what I finally determined to be the right code. What I have come to realize is that this was really putting the cart before the horse. I was focusing on getting the website up and running in what I, at the time, thought was the most direct method possible. What I lost sight of was that the entire point of the exercise was to improve my skills and, ultimately, learn how to program.
It took me entirely too long to determine which tools to use, which stemmed from my general unfamiliarity with web programming. I did learn how to create web pages in library school; however, we were taught the basics and then given a version of Adobe Dreamweaver to learn with. Though I believe I did grasp the basics of HTML and CSS, Dreamweaver is such a powerful tool that it pretty much does all the other web programming for you. Though I used PHP and JavaScript through Dreamweaver, I never really learned any of it. By the time I realized that most of the API code samples were given in JavaScript and attempted to start learning it, I had already wasted too much time.
Then, feeling the time crunch, I took the code samples from the API documentation and started plugging individual lines into Google. The idea was to try to understand what those lines meant. However, all this really did was refer me back to the API documentation. I didn't understand the code well enough to know the difference between the defined function, the function arguments, or the variables and parameters that they utilized. It is right around this time that I came to the realization that I was not, effectively, learning code – my actual goal from the beginning.
Coding is something that you learn by doing, by learning the most basic of commands and playing with them. Learning what breaks the code and what doesn't. I also realized that I had a few other skills to brush up on beyond even my goal of learning and implementing a JavaScript-based federated search engine: if the site was going to be even remotely presentable, I would need to learn the new CSS3 formatting in order to make the search and results presentable. Therefore, in order to get back to the purpose – I reprioritized.
I found a very basic, but great, Javascript tutorial that I have started working through. I wrote the following, in attempting to learn some of the basics of Javascript.

<!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN" "http://www.w3.org/TR/html4/loose.dtd">
<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=ISO-8859-1">
<title>Nich's first Javascript Lesson</title>
</head>
<body>
<p onclick="alertbox()">This is sample text. This is sample text. This is sample text. This is sample text. This is sample text. This is sample text. This is sample text.</p>
<script>
var x = 0;
</script>
<script type="text/javascript">
function alertbox() {
    x = x + 1;
    if (x < 5) {
        alert("You clicked the sample text " + x + " time(s)!");
    }
}
</script>

</body>
</html>

With this script I learned how to call JavaScript functions from HTML, and how to build a basic function that uses variables and limits. You can attach onclick="alertbox()" to any HTML element; whenever that element is clicked, that function will run. This little bit of script defined a variable:

<script>
var x = 0;
</script>

I then used that variable in the alert text: "You clicked the sample text " + x + " time(s)!". To ensure that x is treated as the variable, and not just as another letter in the string, you use the + signs and quotation marks to separate out the string and show where the variable is included. However, to get it to display the correct number you need to add x = x + 1 at the very beginning of the function. The last thing that I played with today was conditional statements: by putting in if (x < 5) I made it so that the alert box only pops up the first four times that you click on the text.
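The same counter-with-a-limit pattern can also be pulled out of the page entirely. Here is a sketch using a closure instead of the global x (makeCounter is my own name for this, not from the tutorial):

```javascript
// Return a click handler that keeps its own private count.
// It reports a message for the first (limit - 1) calls, then
// goes quiet, mirroring the if (x < 5) check in the page script.
function makeCounter(limit) {
    var x = 0;
    return function () {
        x = x + 1;
        if (x < limit) {
            return "You clicked the sample text " + x + " time(s)!";
        }
        return null; // past the limit: no more alerts
    };
}

// In the page, the element's onclick handler would alert the
// returned message whenever it is not null.
```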
I am going to prep my application materials next. I intend to submit them immediately as is, linking only to this blog. This way I can continue to build the website and actually learn the JavaScript at a manageable rate. I still intend to build the site, and I still intend to learn JavaScript and through that become more familiar with programming itself. However, by submitting my materials immediately, without the site, I will have approximately the same chance at getting the job and really be able to focus on the entire point of this exercise – self-improvement.

Tuesday, June 11, 2013

Dream Job Longshot - Course Re-evaluation

So here I am, very impressed with Drupal and reading an awesome book about it, when I finally realize the obvious. For the purpose of my two-page demo site, using Drupal is the equivalent of getting yourself a pool with a wave machine in order to splash water on your face. Drupal is definitely still on my to-do list. I'm making it a personal goal to learn Drupal and hopefully work on a Drupal module, theme, or program that will allow libraries to start utilizing Drupal the way OpenScholar has helped universities. I'd like to help libraries tie their easily configurable Drupal sites into their catalogs, federated searches, and library user info in a way that will allow small local libraries or community-college libraries to expand their sites beyond the common where-we-are/policies/search-our-catalog sites that they have now.
However, all those big dreams are not relevant to my current goal. The two obvious first steps toward that goal are to download and start using free web-editing software, and to find all the free search APIs that I can, sign up for them if need be, and start poring over the code examples to figure out how I can integrate as many of them as possible into one search-box interface. I will need a CSS page to help standardize, theme, and define my whole site, at least two – likely three – HTML pages, and an unknown number of PHP pages to make the scripting work.
Based on the recommendation of my partner, I selected Eclipse. Understanding everything the Eclipse Foundation does would be well beyond me, but to sum it up quickly as I understand it: they provide free runtime, writing, and testing environments for all kinds of code and programming infrastructures. Though it took a separate web search to understand this, I just wanted their web-development platform. It was hidden inside a larger bundle of platforms, which is why it wasn't immediately apparent, but I soon found and downloaded the Eclipse IDE for Java EE Developers. To do this I just went to the Eclipse Downloads page, made sure I had the right OS in their drop-down menu, and then selected the right version for my computer (I double-checked by right-clicking Computer in my Windows menu and matching what I saw on the screen there). You then pick a download mirror and get your download, which of course came zipped. To unzip it I downloaded 7-Zip, a free and open-source file-extraction program. Then I installed Eclipse, created a new dynamic web project – and I was off and running.
However, having a usable tool to create my website was only the barest beginning of a first step. I needed to locate free search APIs on the internet. There are many sites that already do what I am attempting, such as metacrawler.com and dogpile.com. I found this encouraging, as the open existence of these sites meant that what I was attempting to do is neither illegal nor discouraged by the builders of the search APIs, and that I would not likely be violating any API agreements. However, I still intend to review those agreements carefully as I build up my interface.
So what did I find out there when I started exploring freely accessible search APIs? Well, I have to admit I was rather surprised. Only two of the three APIs commonly used by the previously mentioned federated search sites are actually free. It turns out that Yahoo! no longer offers a free search API. I will admit that the pricing is cheap, on the order of dimes per 1,000 queries per day for each type of query (i.e. web, image, etc.). What Yahoo! offers now is Yahoo! BOSS Search, and though it appears to be a solid and reasonably priced product, it does not fit into my overall budget (of $0).
Next I researched Google's API. The original Google Search API has been deprecated, meaning that Google no longer offers it for development, and everyone is pointed towards the Google Custom Search API. After you log in with a Google account, setting up a Google Custom Search is the easiest process I've seen yet. Though this search API is obviously intended to search a specific site for you, I found easy directions provided by Google to tweak it to whole-internet searches. After having followed those instructions and played around with the features, it is easy to get code samples and an API key. So at this point I moved on to find more accessible search APIs to integrate.
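For reference, a Custom Search request is just an ordinary HTTP GET against a versioned URL. Here is a sketch of the request shape as I understand it from Google's documentation; the key and cx values below are placeholders, not working credentials:

```javascript
// Assemble a Google Custom Search JSON API request URL.
// 'apiKey' is your API key and 'cx' identifies your custom search engine.
function customSearchUrl(apiKey, cx, query) {
    return "https://www.googleapis.com/customsearch/v1" +
           "?key=" + encodeURIComponent(apiKey) +
           "&cx=" + encodeURIComponent(cx) +
           "&q=" + encodeURIComponent(query);
}

// e.g. customSearchUrl("MY_KEY", "MY_CX", "federated search")
```

The server answers with JSON, which is part of what makes it easy to plug into a JavaScript page.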
The next place I ended up was Bing. Bing is the other large search engine, somewhat famous in America as the one developed by Microsoft. Bing does have a free-to-use basic search API, and the Bing API has extensive documentation on its implementation. All I had to do to get access to this one was sign up for a Microsoft account and sign up to use the API on their Azure Marketplace.

I found three other free APIs which I will continue to look into tomorrow. They are the Faroo Free API, the Entireweb Search API, and the Yandex API. Not all of these may prove usable, or usable all at once like I hope. To be continued.

Monday, June 10, 2013

Dream Job Longshot – 1st Weekend Recap

Being something of a bibliophile, as I expect all librarians at heart are, instead of immediately finding and installing Drupal like I promised myself I would, I went out and started reading a book about it. There are many Drupal books out there; however, the one that I ended up using was The Definitive Guide to Drupal 7. If I was going to invest the time into learning this, I wanted to make sure it would actually help me achieve my goal. The best way to start was by learning exactly what Drupal is, and whether it would help me with my target goals.
Drupal is a content management system. Drupal is written in PHP and JavaScript (using jQuery). Drupal uses databases on the web server (either MariaDB, MySQL, or PostgreSQL). The point of using Drupal instead of developing these things yourself from the ground up is that Drupal is an open-source system. It saves the web developer (me, in this case) from having to figure out the programming from scratch. It could also accelerate my learning curve: by using Drupal I can figure out which pieces I need and why, and then go look at their core programming to learn exactly how the PHP I want works, instead of having to figure it out the hard way.
Unlike other content management systems such as Wordpress, which is very focused on blogs, Drupal is designed to be highly diverse, extensible, and scalable. It is literally designed to be able to handle all types of websites from e-merchants to (and this surprised me) the White House website.
Drupal is also an application framework. This means that it is designed to be a platform for developing serious web applications, and it is meant to handle multiple APIs well. Since it is an application framework, it can be used as the basis for a variety of apps, from smartphone apps to Facebook apps. It can also be found in non-CMS roles, such as the front end of Java-based apps or the back end for AJAX or Flash. An example of this that I found personally interesting was OpenScholar – a Drupal-based website creation and hosting program designed to allow academic institutions to host an unlimited number of academic websites. It allows professors and students to create those websites with no knowledge of programming or HTML, including being able to manage their own dynamic content, publications, events, blogs, classes, themes, and even online collaborations.
Drupal supports RDF. I first came across RDF when it was mentioned as an aside in my cataloging course, and then more thoroughly in my metadata course. RDF is a very simple 'triple' framework, where 'thing A / has property / value', so one common triple would be 'ebook 1 / has author / john smith'. RDF is a core component of the semantic web: the idea becomes embodied in interactive API frameworks such as Drupal, which can use these assigned properties to pull and interact with the data as the various API interfaces or programs talk to each other, speaking in either SOAP or REST, the 'languages' that I discovered on Friday.
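To make the triple idea concrete, here is a small sketch in JavaScript (the second triple and the valuesFor helper are my own illustration, not from my coursework):

```javascript
// RDF-style triples modeled as plain subject/predicate/object records.
var triples = [
    { subject: "ebook 1", predicate: "has author", object: "john smith" },
    { subject: "ebook 1", predicate: "has format", object: "EPUB" }
];

// Look up every object value for a given subject and predicate.
function valuesFor(subject, predicate) {
    var results = [];
    for (var i = 0; i < triples.length; i++) {
        if (triples[i].subject === subject && triples[i].predicate === predicate) {
            results.push(triples[i].object);
        }
    }
    return results;
}
```

A real triplestore does this at scale, and lets the object of one triple be the subject of another, which is what makes the semantic web's machine-to-machine queries possible.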

The final straw that convinced me that Drupal is the key I want to use to try to unlock my dream job was the fact that one of Drupal's modules (a Drupal module is a bit of extensible code that has already been made and released to the open-source community, and that can add extra features or depth to your website) is an Apache Solr search module. Solr familiarity and working knowledge were mentioned as desirable qualities in my dream job listing, and though I don't know much about it yet – this is the first real lead I have had! My plans for tomorrow include: continue reading up on Drupal, try to design a wireframe for my demo site, and start determining which modules and content I wish to use in my design.

Friday, June 7, 2013

Dream Job Longshot - Recap Day 1

Today I spent designing a project that I could work on and then display online, one that would help prove my "Ability to learn new technical skills quickly; ability to meet deadlines; strong service orientation." I hoped to do this in such a way that it would tie many of my other missing desired skills into a neat little bundle. For my project I decided to try to build a search aggregator, otherwise known as a federated search engine for the web, by integrating multiple search APIs using RESTful XHTML and hopefully one of those "object oriented languages (Ruby, Python, PHP, etc.)", thereby tying together a very handy demo of my job suitability. Loaded with optimism, I began to search the internet for how best to do this, not knowing I was in for hours of frustration. What I did not realize is that as a university student I had been spoiled: I had always had the exact steps laid out for me to complete my assignments. Back then all I had to do was follow the path laid out for me and then add a creative twist in order to excel. Here I felt blocked at every turn.

I thought to start by looking at Google's API. Not only does Google have the honor of being THE major search engine on the web nowadays, but I know that they post their APIs and have specific instructions on how to use them in order to encourage programmers to build with them. What I found, however, was not encouraging. I did quickly find a comprehensive list of APIs that Google offered in their APIs interface. However, I couldn't help but notice that a full-fledged Google Search was not among them. It also occurred to me that I had developed this plan without fully knowing what an API was.

API stands for Application Programming Interface, and I found the following handy definition:

(Application Programming Interface) A language and message format used by an application program to communicate with the operating system or some other control program such as a database management system (DBMS) or communications protocol. APIs are implemented by writing function calls in the program, which provide the linkage to the required subroutine for execution. Thus, an API implies that a driver or program module is available in the computer to perform the operation or that software must be linked into the existing program to perform the tasks. -PC Magazine Encyclopedia – API

So plugging an API into a webpage is basically like outsourcing a specific function of that webpage that you want to implement. Thinking back on this, I realized that we had in fact learned about this in library school! I may never have heard the term API before, but we had talked about the increasing functionality of XHTML over basic HTML and how the idea of cross-program functionality would greatly increase the utility and capabilities of the web.

Turns out, this is also connected to the RESTful infrastructure that was mentioned in my dream job listing. Yet once again I was discovering that I was unfamiliar with the terminology, even if I was aware of the concepts.

SOAP is the acronym for Simple Object Access Protocol, which – to strip the computer geekness and IT terms from the definition – is basically a universal language for computers. It is what a Windows computer will send to a Linux server to ask it for information and be understood. Technically, SOAP refers to the tiny information packets, or messages, that are sent between machines in order for them to communicate. -Techterms.com – SOAP

REST is the acronym for Representational State Transfer, a similar universal language for computers. However, unlike SOAP, REST is also an architecture for websites as well as a language. Confused? I was! Sites that implement REST are termed RESTful systems. RESTful systems have URIs, or Uniform Resource Identifiers, attached to everything. In practice these identifiers are URLs, but instead of being attached only to each page, they are also attached to things like users, database objects, transactions, etc. REST also has a few other building blocks. It assumes that there is a client (i.e. you, on your local computer, using the website) and a server (the RESTful system). Each client request is individually generated with all the necessary data, and the server responds statelessly (meaning without storing any data about the request on the server). Each response is designated as cacheable (or not), which tells the client's computer whether or not to store the results (stored results allow faster future processing; unstored ones ensure that old data is not inappropriately used or resubmitted). Finally, there are several communication layers in REST: the client computer, the client's web-browser page, the server's response system, and all the data within the server itself. This allows servers to scale what they house dynamically and appropriately. -techopedia REST
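The "each request carries all necessary data" idea is easy to sketch: a stateless search request is just a URI with every parameter spelled out in its query string (buildSearchUrl and the example endpoint below are my own illustration):

```javascript
// Build a self-contained RESTful GET URI: everything the server
// needs to answer rides in the query string, so no session state
// has to live on the server between requests.
function buildSearchUrl(endpoint, params) {
    var parts = [];
    for (var key in params) {
        if (params.hasOwnProperty(key)) {
            parts.push(encodeURIComponent(key) + "=" + encodeURIComponent(params[key]));
        }
    }
    return endpoint + "?" + parts.join("&");
}

// e.g. buildSearchUrl("https://api.example.com/search", { q: "libraries", page: 2 })
```

Because the URI identifies the resource completely, the response can be cached against that URI by the client, which is exactly the cacheability building block described above.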

I found a good article that compares REST and SOAP called "Knowing when to REST". This article was very good at describing the difference between the two systems and which is appropriate to use when. What it boiled down to is that if a website is providing a service-based activity, such as a merchant or calendar, then SOAP is likely the better choice, as it provides solid best-practice standards for reliability and security where REST does not. (REST can still be secure, but every case has to be judged individually – there is no 'best practice'.) Whereas if the site is providing a resource, such as a digital library, search engine, or typical news site, then REST makes more sense to use.

To go back to my previous thought, REST and SOAP are related to being able to use APIs in your site because the APIs will need one or the other in order to communicate their interactive data successfully. Which to use in a given scenario is often chosen for you by the server or API requirements, as I discovered in "SOAP vs REST API Implementation" by FliquidStudios.

Going back to my original frustration, it looked like the API that Google offered for free was really only designed to search specific sites and had a limited number of uses per day. Well, it's not like I ever expected to exceed the limited number of uses per day, but I didn't want to just search specific sites; I wanted to do a whole-internet Google Search, and that was just for starters!

Then, while poking around, I found some interesting leads on how to do this theoretically with RSS feeds. Now, as far as I knew, RSS feeds were information streams that you could hook your email, mobile app platform, or whatever into to get constant updates on whatever topic the feed was designed to cover – so how could you use one to help plug into a search engine? Admittedly, the page that claimed this was the Wikipedia page on search aggregation, and as a trained librarian I know full well that those are not always accurate, especially pages flagged with issues like this one was. However, I also knew that this deserved some further investigation.


While looking around for this I stumbled across the website Wopular. I was excited by this discovery because this site appears to have achieved tangentially on a large scale what I will be trying to do on a small one. I also found this neat article describing how and why the site was designed the way it was. So my goal for tomorrow: download a localhost version of Drupal in order to start playing with it, perhaps this can count as my web-oriented programming language!

Dream Job Longshot - Initial Examination and Goal

Well, even though I don't seem to meet even the minimum qualifications of my dream job, I need to start by reviewing everything they are looking for. I find it likely that the creators of this job posting were shooting for the moon as much as I am in attempting to apply for it. Basically, they saw no reason not to ask for everything they thought they might possibly want – in this job market they have a good chance of getting it.

When I discussed the job posting with my partner, who has been a longtime programmer for physics simulations, he saw something different in it. He saw that they asked for proficiency in an object-oriented language, but then named several languages that are really focused on web development rather than applications programming. He interpreted this to mean that what they were really looking for was someone who could develop the web-side interface for other programs, so that those programs could be used through the library's web server, rather than someone who creates programs from the ground up.

Therefore I decided that I should write a web page for my project. I actually got my idea from library school, where I remembered learning about federated searching. Federated searching is the idea that you can search one term in multiple places; in a library context, it means searching both the card catalog and multiple online article databases at the same time. Here I thought I would try to build a search engine that would search and retrieve results from multiple search engines at the same time.
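In JavaScript terms, the core of that plan is just fanning one query out and merging what comes back. This is only a sketch of the shape of it; the searchOne callback stands in for real API calls I haven't written yet:

```javascript
// Fan a single query out to several engines and merge the results.
// 'engines' is a list of engine names; 'searchOne' performs one
// engine's search and returns its results as an array.
function federatedSearch(query, engines, searchOne) {
    var results = [];
    for (var i = 0; i < engines.length; i++) {
        results = results.concat(searchOne(engines[i], query));
    }
    return results;
}
```

Real search APIs answer asynchronously, so the finished version would collect results in callbacks rather than a simple loop, but the fan-out-and-merge shape stays the same.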


So I have my plan and my project, now the hard part.

Brief Personal History, Short & Long-term Goals

Personal History


I am a recent graduate of the School of Communication and Information at Rutgers University. I completed my degree entirely online, completed a digital-libraries specialization, and graduated in October 2012. I currently live in Oregon with 3 cats and my domestic partner.

Short-term Blog Goal


I have been searching for a job since I graduated and thus far have been unsuccessful in getting anything, much less something in my field. However, today I spotted my ideal job. Despite the fact that this job is exactly the kind I was aiming for with my degree, I find that I still lack desired proficiencies. Therefore, short-term, this blog will be about my journey in attempting to acquire those proficiencies and demonstrating them – all before applying for this job in the next three weeks. The things that the job is asking for that I feel I need to gain are:

Demonstrated experience working with UNIX or LINUX server platforms, related software, and basic administration utilities.
Proficiency in web-based object oriented programming languages like PHP and/or Ruby on Rails.
Experience working with APIs, mobile technologies, and web services.
Working familiarity with REST and SOAP.
Working knowledge of DSpace and Fedora.
Knowledge of linked data and its application to library and cultural heritage projects.
Working knowledge of SQL, NoSQL databases, and/or RDF triplestores.
Experience implementing and maintaining a search index such as Solr or ElasticSearch.

Ironically, the only requirements of the job that I trained for throughout my graduate degree, and that I feel I already meet, are:

Knowledge of metadata standards and digital preservation systems, and of course actually having the required degree.

Long-term blog goal


I want this blog to serve as a useful tool to anyone who is trying to get free knowledge and service from the internet. Whether they use this knowledge to help grow a physical library into the digital realm on a manageable budget, or are simply interested for their own benefit of using the free software, databases, or other information-discovery related content that I find and describe. I hope to post once every other week describing a free resource that can be utilized by any library or individual. There will likely be additional posts of other library related content such as my initial library job-search and resume building posts.