Saturday, 7 January 2012

DITA Blog Post 2

The URL of this Blog Post 2 is http://liudmilaestienne.blogspot.com/2012/01/dita-blog-post-2.html

Web 2.0, Web 3.0 and public libraries.

Introduction.

The technologies that underpin Web 2.0 and Web 3.0 are not new, but their potential has only recently been exploited due to the decreasing cost of devices and technologies and the ease of accessing the internet. As is typical for any technology, Web 2.0 and Web 3.0 are both empowered and limited by their users. Public libraries have a long process ahead of them in terms of efficiently exploiting these proven technologies by successfully and economically fitting them into the existing library structure in order to enhance services and performance. 

    
Practical application of Web 2.0 technologies in public libraries.

Web 2.0, with its inherent flexibility and numerous applications, has enabled public libraries to widen the range and nature of the services they provide to their customers. By adopting Web 2.0 tools, libraries endeavour to customise and personalise their services to render them user-friendly (Evans, 2009, p.14). However, public libraries fall behind their academic counterparts in adopting and utilising these powerful technologies. While academic libraries, driven by their innate aspiration to explore and experiment, widely exploit Web 2.0 platforms to facilitate learning and communication, public libraries are less willing to launch themselves into the virtual social environment. The benefits of being present in resources like Wikipedia or Facebook are unquestionable: there is an obvious promotional value for libraries, since these are users' social spaces of choice. Yet as state-funded institutions, public libraries tend to be more cautious about allowing users to contribute to their content or about exposing copyrighted material, either of which could leave them vulnerable to legal liability. Some commentators perceive the Web 2.0 phenomenon as transitional and unimportant, not worth staff time and effort to engage with. But as Kelly shrewdly noted, "there may be risks in doing nothing or in continuing to use current technologies. This might include losing one's competitive edge, lost opportunity costs, loss of one's user community to services that are willing to take risks" (Kelly, 2010, p.109). Hence the necessity emerges for public libraries to find a fine balance between being overcautious and taking reasonable, calculated risks in order to remain a viable part of the social realm.

Some Web 2.0-era tools are widely accepted and used by library professionals: RFID technology, for example, has truly modernised public libraries' service model through the introduction of self-service machines. Others, such as QR-code leaflets used to promote library services, could be criticised as a waste of money (very few library users know what they are or how to use them) or as an attempt to mimic marketing strategies from the retail industry. The main impetus for adopting Web 2.0 technologies is to complement and keep existing library technologies up to date, rather than to supplant them. Libraries face the necessity of describing and using a growing body of electronic content that the existing MARC standard was not really designed to cope with (Brophy, 2007, p.188). Web services might also be used to complement libraries' specialised applications. The ability of XML to describe any type of information in order to satisfy users' fast-changing needs makes its use in the library environment essential. By using XML applications effectively, libraries may enhance the functionality and performance of their Integrated Library Systems without investing additional funds in upgrading those systems (Tennant, 2002, p.2). Using XML does not require in-depth technical knowledge: structured data can be generated and managed in human-readable text files (Banerjee, 2002, p.32). This may help libraries tackle an issue inherent in modern information technologies: they become outdated or obsolete in a matter of months.
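To give a flavour of what this looks like in practice, here is a minimal sketch of a bibliographic record expressed in XML; the element names are invented for illustration rather than taken from a formal standard such as MARCXML:

<?xml version="1.0" encoding="UTF-8"?>
<record>
  <title>The library in the twenty-first century</title>
  <author>Brophy, P.</author>
  <format>print</format>
  <year>2007</year>
  <available>true</available>
</record>

Because both the tags and their nesting are chosen by the institution, the same approach can describe loans, events or performance statistics, and the file remains readable to humans as well as to software.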
 
One of the applications that has enabled public libraries to freely use web sources to generate up-to-date, dynamic content is the mashup, defined by Engard as "a web application that uses content from more than one source to create a single new service displayed in a single graphical interface" (Engard, 2009, p.xv). This could be anything from a simple map of a library's locations to a more sophisticated mashup that assists users with the fast retrieval of required information by filtering the content of remote resources according to specific parameters (Fichter, 2009, p.11). Library websites have changed dramatically with the implementation of this technology; content that previously used highly specialised terminology has been rendered more intuitive and media-rich, making it easier to understand and navigate. However, it should be borne in mind that, given the nature of free web services, their content may change or disappear overnight without notice, leaving it to librarians to monitor and control the quality and appropriateness of library websites (Herzog, 2009, p.70). Libraries could go one step further in adopting this highly productive technology by making their own content mashable, but there are several issues associated with this; some are objective, such as the proprietary status of some materials, while others relate to librarians' fear of becoming redundant in a new information age.


Web 3.0: unrealistic future or future reality?

The Semantic Web is an attempt to better organise the exponentially growing web content with which current technologies are unable to cope efficiently. The idea of the Semantic Web as "not a separate web but an extension of the current one" (Berners-Lee, Hendler and Lassila, 2001) lies in the ambition to bestow meaning on all the data on the web by describing it with the Resource Description Framework (RDF), a highly structured, predefined format in which machines can read statements about resources, with RDF Schema defining the classes and properties used to relate them. This standardised description of electronic resources enables the next stage of web development, whereby ontologies (conceptualisations of meaning) would allow software applications to infer new knowledge from those descriptions. The sheer scale of the work required to render web agents "intelligent", together with the unanimous agreement needed for participants to accept and comply with the standards, raises questions about its feasibility and potential success. Since the web, despite the altruistic efforts of Tim Berners-Lee and his supporters, is still largely perceived by its main driving forces as a profit-generating tool with millions of potential customers worldwide, no compelling argument for the commercial value of the Semantic Web has yet been made that would entirely persuade them. Many commentators also express doubt about its usability, pointing out that the field of practical application of the Semantic Web, even setting commercial interest aside, is still limited and requires further research.
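As a rough illustration, a single RDF/XML description stating two triples about a book might look like the sketch below; the resource URI is invented for the purpose, while the property names come from the real Dublin Core vocabulary:

<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
         xmlns:dc="http://purl.org/dc/elements/1.1/">
  <rdf:Description rdf:about="http://example.org/books/library-mashups">
    <dc:title>Library Mashups</dc:title>
    <dc:creator>Engard, N.C.</dc:creator>
  </rdf:Description>
</rdf:RDF>

Each property asserts a subject-predicate-object statement (this resource has this title, this resource has this creator), and software that understands the shared vocabulary can merge such statements from many different sources.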

Despite many sceptical views, there are some areas where the Semantic Web may have a great impact on solving current issues and allow for future developments to take place. Libraries may benefit from the phenomenon of the Semantic Web both by actively using it and employing their expertise to contribute to its content. For example, the Semantic Web could provide an opportunity for public libraries to affirm and enhance their value at present. This could be done at several levels: firstly, the undeniable importance of public libraries as institutions makes their social impact on public life worth studying. Libraries’ hitherto scattered performance data, both quantitative and qualitative, which at the moment is available in different formats, could be rendered meaningful and coherent by describing it in RDF triples and creating ontologies, thus making it accessible to broader social, medical, educational and other applications through the Semantic Web. Secondly, librarians face the challenge of coping with ever-increasing amounts of information and they are still expected to find answers to customers’ queries promptly and effectively. Many libraries use federated or so-called platform searches in which multiple information sources are simultaneously searched in order to retrieve and compile the required information. However, the efficiency of the federated search would be optimised and rendered less ambiguous if the electronic source providers presented their data in RDF format and applied a standard to the vocabulary which is used (Byrne and Goddard, 2010). Thirdly, the attempt to make libraries’ bibliographic data a part of the Semantic Web, though not entirely successful at the moment, is still appealing and could be explored further (Yee, 2009), since the main goal would be improved technology performance rather than superseding the intellectual work of library professionals. If adequately adapted and developed, RDF format could solve the problem of the excessive complexity of bibliographic data which may hinder its interoperability.

The open movement, which comprises open access publishing, open source software and open data, has the potential to revolutionise the existing perception of the web. The web as we have it was designed by people with a pre-web frame of mind and was expected to function according to the rules of the real world. However, as an evolving new reality it generated its own set of rules and proved intractable to the "real-world rules" its creators tried to impose upon it. Many commentators claim that the web is a world which should be allowed to live by its own rules, where proprietary rights over published materials or software would be considered to contradict the essence of what some believe to be the philosophy of the web: a free global information space. Hence the open movement's aim to boost the value of the web by eliminating the restrictions posed by proprietary interests. It is noteworthy that the term "free" in the virtual context is often associated with something unreliable, volatile and hard to control, which is why public libraries may be wary of making greater use of open source software as it stands.

One initiative of the open movement is open access publishing, defined by Esposito (2004) as "accessible with only a browser and free to the user". At the moment it concerns mainly research papers, which are believed to be capable of making a valuable impact if made available to the wider public free of charge. Even though public libraries represent a smaller share of subscriptions to electronic publications than academic libraries, which have an obligation to subscribe in order to support their institutions' research, they are traditionally expected to bear all the costs associated with access to materials, providing them free of charge to their users without obvious self-interest. If open access publishing were taken a step further to encompass other types of non-academic works, it could help public libraries ease the strain on their constantly diminishing budgets. Until recently public libraries even bore all the costs of the interlibrary loan service, charging users only a minimal fee for processing reservation requests. If items that are unavailable locally, and therefore costly to obtain, were provided in electronic form with free access, libraries would still fulfil their mission but at a lower cost to themselves. However, such a scenario would necessitate a paradigm shift in our conception of intellectual property and the copyright legislation that protects it, and it could also deliver the coup de grâce to the publishing industry, with far-reaching negative consequences for society insofar as the publishing industry implicitly advocates literature and therefore literacy.


Conclusion.

Public libraries have to fight many battles at present. They still need to realise that technology is a culture, instead of burying their heads in the sand or reluctantly bearing with it. Their cautious approach to adopting and integrating technologies, although often criticised for its slowness, can only be justified by their qualms about investing substantial amounts of money in technologies which could become obsolete by the time they are fully up and running. Public libraries are also understandably anxious about safeguarding their future existence if they embrace technologies which could ultimately usurp their traditional functions. The current challenge for libraries is to work out how to transform a crisis into an opportunity by carving out a way in which they can fruitfully co-exist with technology, rather than lying down, playing dead and waiting for it to pass.      



References
1)      Banerjee, K. (2002) ‘Improving Interlibrary Loan with XML’ in Tennant, R. (ed.) XML in Libraries. New York: Neal-Schuman Publishers.
2)      Berners-Lee, T., Hendler, J. and Lassila, O. (2001) The Semantic Web. Available at: http://kill.devc.at/system/files/scientific-american.pdf. (Accessed: 23 November 2011).
3)      Brophy, P. (ed.) (2007) The library in the twenty-first century. 2nd edn. London: Facet.
4)      Byrne, G. and Goddard, L. (2010) ‘The Strongest Link: Libraries and Linked Data’, D-Lib Magazine, 16 (11/12). Memorial University Libraries, St. John's, Newfoundland and Labrador. Available at: http://www.dlib.org/dlib/november10/byrne/11byrne.html. (Accessed: 24 November 2011).
5)      Engard, N.C. (ed.) (2009) Library Mashups. London: Facet.
6)      Esposito, J.J. (2004) ‘The Devil you don't know: the unexpected future of Open Access publishing’, First Monday, 9(8). Available at: http://firstmonday.org/htbin/cgiwrap/bin/ojs/index.php/fm/article/view/1163. (Accessed: 3 January 2011).
7)      Evans, W. (2009) Building Library 3.0. Issues in creating a culture of participation. Oxford: Chandos Publishing.
8)      Fichter, D. (2009) 'What Is a Mashup?' in Engard, N.C. (ed.) Library Mashups. London: Facet.
9)      Herzog, B. (2009) ‘Information in context’ in Engard, N.C. (ed.) Library Mashups. London: Facet.
10)   Kelly, B. (2010) ‘A deployment strategy for maximising the impact of institutional use of Web 2.0’ in Parkes, D. and Walton, G. (eds.) Web 2.0 and Libraries: Impacts, technologies and trends. Oxford: Chandos Publishing.
11)   Tennant, R. (ed.) (2002) XML in Libraries. New York: Neal-Schuman Publishers.
12)   Yee, M.M. (2009) ‘Can Bibliographic Data be Put Directly onto the Semantic Web?’, Information Technology & Libraries, 28 (2), pp. 55-80, Library, Information Science & Technology Abstracts with Full Text, EBSCOhost. (Accessed: 14 December 2011).


    

Sunday, 20 November 2011

Mobile information.


The provision of information via mobile technologies has become an integral part of our everyday life. The idea of compressing information and carrying it around for the immediate satisfaction of one's information needs didn't take long to gain users' appreciation. We no longer use our mobile devices just for calling and texting; according to Richard Butterworth, those seem to be their least used functions. We have grown quite addicted to mobile technology, with its magical power to connect us to the external world regardless of our location. The decreasing cost of devices, paired with their ever-increasing functionality, makes them a staple of our daily routine.

As with all great things in this life, and with technology in particular, there are some limitations that may leave users frustrated and dissatisfied. Some issues, such as battery life or the speed and ubiquity of internet access, might be resolved over time in line with Moore's law, the observation that the number of transistors that can be put on a chip roughly doubles every eighteen months to two years. Most of us have already experienced the fast pace of technological evolution: mobile devices tend to become outdated within six months of purchase. On the more positive side, someone who lacked the means or opportunity to buy a first-generation smartphone could later buy a more sophisticated device that performed much more smoothly. However, the inherent problems of mobile devices, such as limited screen and keyboard size, are unlikely to be solved in the foreseeable future.

The main advantage of mobile devices is their context awareness, which allows them to continuously provide useful information for users. Currently, GPS offers accuracy of around 40 metres, which is not sufficient for augmented reality, i.e. providing additional information about nearby objects. The European satellite project Galileo is expected to provide accuracy of around one metre, and commercial users could get accuracy down to a centimetre. The major problem with satellite positioning is interference: it relies on simultaneous signals from three or four satellites, so clouds, mountains and buildings can still stand in the way of this multi-million-dollar, miracle-of-science technology.
Machines, by contrast with humans, still lag behind in visual recognition. Even though there is face recognition software for PCs, it doesn't perform dazzlingly on phones. It does better with place recognition, which can come in handy.

Bluetooth transmits data over short distances, up to 10-15 metres. This technology is limited by privacy issues rather than by its functionality. Very few users would appreciate being fed information by information fountains via Bluetooth, or seeing personalised billboards when passing by with Bluetooth switched on. I certainly wouldn't.

Limitations.

Limited screen size represents a challenge for website designers. A common approach is to create a single document containing all the information, and let CSS determine how it is presented in response to requests from different devices. In the case of a mobile device, only essential information is displayed; fancy graphics and images are stripped out so that the document fits the small screen and downloads faster.
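As a rough sketch of how this can be done (the class name below is invented for illustration), a CSS media query applies extra rules only when the screen is small:

/* only applies when the viewing area is 480px wide or narrower */
@media screen and (max-width: 480px) {
  img.decorative { display: none; }   /* drop the fancy images */
  body { width: 100%; font-size: 16px; }
}

Media queries are only one way of doing this; a separate mobile style sheet linked with a media attribute achieves much the same effect.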

Keyboard size can also be an issue, especially if one is getting on. There are some ideas for tackling it, e.g. keyboards that can be folded or rolled up and carried in one's pocket, which could be fun albeit tricky to use, but people still find them not as good as they would wish. Hence auto-correction and auto-completion are built in, though whether anyone actually uses them is beyond the scope of this discussion.

In the lab, we had to design an interface that would satisfy our information needs as DITA students. I joined Caroline and Brittany in their brainstorming. We came up with some functions that we thought would be useful in the mobile information context. The idea was to complement the bits we thought were missing on Moodle for more efficient and comprehensive studying. Brittany even made a fancy drawing, so you can have a look on her blog. I will limit myself to a concise list of our ideas.

We bore in mind that it would be a complementary mobile phone interface; by no means did we intend to replace Moodle. It’s too complicated.

1)    Lectures in two different formats: pdf and podcast (we loved the idea of listening to the lecture again to absorb the material after a lab session or after we've done some reading).

2)    Exercises in two formats: pdf and visual aids, i.e. screenshots of the steps we are to take, plus some prompts on how we could take it further. Half of us usually waste time struggling to understand how to do them; we could use that time for actual learning.

3)    Events – curriculum: lecture dates and times; and relevant events, conferences, seminars related to the course. It is really tiring to click through dozens of buttons back and forth trying to find things.

4)    Newsfeed/discussion board feed. We'd love to be alerted when there are new posts, and we would appreciate being able to retrieve them more easily than we can now.

5)    Blog or as Brittany suggested “Ask Andy” button.

Wednesday, 9 November 2011

APIs, XML and Mashups

Last time we discussed the main features of Web 2.0 and issues associated with using it and contributing to its content. This week’s lecture shed light on how Web 2.0 is rendered practical.
A web service is a type of API. Web services are the technical infrastructure that makes Web 2.0 usable: to provide flexible, constantly evolving content, servers send information to clients, which read, process and personalise it for users' information needs. Web services thus serve as an intermediary between complex services on the internet and their users.
In the Web 2.0 environment, there is no need to purchase software as in the good old days. I still can't believe how much I used to pay for a piece of software. The software stored on the internet retains its proprietary status: the data sent from users' computers is processed by software sitting on remote machines on a pay-per-use basis. The idea of having nothing on one's hard drive is taken even further by the concept of cloud computing, which allows users to access and manage data stored on the internet. Personally, I do have some issues with it. How are privacy and control ensured? What happens if servers are shut down, confiscated, hacked, sold, etc.? For instance, Google must disclose users' personal information without notifying them if subpoenaed by the US government.
XML (eXtensible Markup Language) is a set of conventions for creating a markup language that can communicate any data, in any form, between machines. Like HTML it contains tags and attributes, but these are defined by users in order to convey and contain the information required.
APIs (Application Programming Interfaces) provide the user-friendly interfaces that conceal the complexity of data storage and management from users, and they allow users to build their own systems. Mashups are widely used to create new user-oriented content and services by combining information from different web services and APIs.
At the lab we had the opportunity to create our own mashup web page.
The first task was to create a static map with several location markers on it using the Google API. The process consisted of copying and pasting the appropriate strings of HTML into my HTML document and putting in the exact geographical coordinates of the location I wanted to mark. In the second task I created a "LIKE" button by embedding in my mashup web document a string of HTML generated for me by the Facebook web service. For the third task I used the web service provided by Twitter to create a Twitter update feed on my web page. By the end of the session I had managed to produce a basic mashup page in which various external resources were combined to make the page's content versatile, interactive and informative.
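For the record, the static map in the first task boiled down to a single image tag along these lines; the coordinates and parameters here are illustrative, and the URL shape is my own approximation rather than the official documentation:

<img src="http://maps.googleapis.com/maps/api/staticmap?center=51.528,-0.102&amp;zoom=15&amp;size=400x300&amp;markers=51.528,-0.102&amp;sensor=false"
     alt="Static map with a location marker" />

The Facebook and Twitter snippets worked in the same way: the web service generates a string of HTML (or a small script) and you simply paste it into your own page.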


Sunday, 30 October 2011

Coursework Blog Post 1

Public libraries AND/OR digital technologies?


Digital technologies have profoundly changed the internal organisation of public libraries, the format of the services they offer to the public and the way they interact with users. Before the digital revolution users came to libraries, whereas now libraries have to go to their users via information technologies such as 24/7 online library services, Facebook and Twitter. Whilst the ultimate purpose of public libraries - the free provision of information to the widest possible public - remains unchanged, the way in which they fulfil that purpose has had to evolve dramatically to keep pace with the information environment in which they must now operate. The shift in focus from paper to digital information has enabled libraries to positively reshape and expand their relationship with the public and their role within the community; however, this revolution has also brought enormous pressures. For example, in addition to printed collections, libraries nowadays purchase a wide and costly range of electronic subscriptions to online resources and must update their electronic facilities regularly, which represents a significant financial strain. This extensive expenditure is accompanied by the need to provide ever-larger study areas for the free internet service via which users access electronic reference material.

Because libraries are now spending so much on electronic resources, they have to ensure that the public actively use them, for which two criteria need to be met: users need the technology to access the information, either from within the library or from home, and users need to be technologically literate enough to access the data they require. The vast size and boundlessness of the web can restrict its usability insofar as advanced personal search and information management skills are increasingly required (Nielsen, 2006, p.xii). This means that instead of simply being repositories of books, libraries, through the efforts of librarians, now take on a greater role in teaching essential lifelong skills to the public. Thus, for truly modern libraries to operate efficiently, a new cadre of library professionals is required. Librarians must possess a core stock of theoretical and practical knowledge of the technology, which must be continuously updated, and they are generally responsible for teaching the free IT courses which most public libraries now offer their users. The general public's ever-increasing expectations of what libraries can and should provide, and libraries' own efforts to keep up with these expectations and with technological advances, force public libraries into a potentially unending race which could ultimately pose an existential threat as local councils come under increasing financial pressure.

An example of the tremendous effect of digital technologies can be seen in the transformation of library automation systems. Current circulation systems based on relational databases have enabled libraries to develop into multifaceted entities which can process and perform various operations simultaneously. Library databases contain a complex set of tables in which primary keys uniquely identify table rows and foreign keys allow relations to be created between tables. Database Management Systems (DBMS) enable administrators to design, maintain and modify a database. SQL is the language, with specified syntax and semantics, used by application programs or the DBMS to communicate with databases. Administrators also define permissions and levels of access to databases for users. For instance, librarians can insert new data into a database by creating new user accounts or item entries from their application programs, while library users' application programs enable them to perform simple tasks such as borrowing or renewing material. In the library environment a database is not only constantly viewed but may also be updated simultaneously by many users, which makes it a challenge for the DBMS to ensure data consistency (Connolly and Begg, 2010, p.574), without which libraries would be unable to operate smoothly.
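As a sketch of the kind of statements involved (the table and column names are invented for illustration rather than taken from any particular library system), a librarian's application might register a new borrower, while a user's self-service renewal updates a loan record:

-- librarian-level operation: create a new user account
INSERT INTO borrowers (borrower_id, name, joined)
VALUES (10482, 'J. Smith', '2011-10-29');

-- user-level operation: renew a loan by extending its due date
UPDATE loans
SET due_date = '2011-11-26'
WHERE borrower_id = 10482 AND item_id = 55731;

The DBMS decides which statements each class of user is permitted to run, which is how the different levels of access described above are enforced.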

The implementation of technologies in libraries took the cumbersome circulation routine away from specialists and enabled them to spend more time assisting customers with their information needs. Despite the fact that a considerable part of web content has been thoroughly indexed by search engines, theoretically making it easier to retrieve required information, people nonetheless still seek assistance from library professionals. One reason for this is that technological development has created a gap between those who can afford the technology and people from disadvantaged backgrounds. And while those who do have access to technologies are considered more information literate than ever before, they generally lack critical search and evaluation skills. This is partly due to the design of current search engines, which neither encourage users to develop advanced search skills nor to devise sophisticated queries (Morville and Rosenfeld, 2006, p.181); nor does their ranking by popularity foster users' evaluation skills. A recent study by UCL's Centre for Information Behaviour and the Evaluation of Research (2008) describes users as 'promiscuous, diverse and volatile' and their information-seeking behaviour as 'horizontal' and 'bouncing' (CIBER, 2008, p.9), which compels libraries to reconsider and simplify access to their resources, as users are unlikely to spend extra time trying to locate them (CIBER, 2008, p.30). Hence a significant number of general users still turn to librarians' advanced knowledge of information retrieval and their evaluation skills.

With the advance of information technologies, public libraries have to justify their worth as compared with other areas of the public sector. The real challenge for libraries is to market themselves effectively, which is necessitated by the fact that some commentators even question their very existence. The disappearance or downsizing of public libraries would have a detrimental effect on the community infrastructure, depriving those who need it the most of free access to information. Paradoxically, the digital technology which could render public libraries extinct has been shrewdly appropriated by them to provide an argument for their continued existence as they use it to enmesh themselves in the social fabric by supporting those who have been marginalised by the information revolution.   

References
Connolly, T.M. and Begg, C.E. (eds.) (2010) Database systems: a practical approach to design, implementation, and management. 5th ed. Boston, Mass.; London: Addison-Wesley.
Estienne, L. (2011) Liudmila Estienne. Available at:  http://www.student.city.ac.uk/~abkb636/index.html    
        (Accessed: 29 October 2011).
Morville, P. and Rosenfeld, L. (eds.) (2006) Information architecture for the World Wide Web. 3rd ed. Beijing; Farnham: O'Reilly.
Nielsen, J. (2006) Foreword. In Information architecture for the World Wide Web. 3rd ed. Beijing; Farnham: O'Reilly. pp.xi-xii.
CIBER (2008) Information behaviour of the researcher of the future. Available at: http://www.ucl.ac.uk/infostudies/research/ciber/downloads/ggexecutive.pdf (Accessed: 29 October 2011).


Sunday, 16 October 2011

Databases and SQL.

The lecture notes equipped me with initial background knowledge about databases. A database contains a set of information which is organised in a structured way, making it accessible and manageable (Tech Target Inc, 2006).
There are different approaches to organising information in databases. The most efficient seems to be the relational database, in which information is organised in tables. Each table holds information about one type of entity. Tables are connected by keys: the primary key is a unique attribute that identifies a specific row in a table, while a foreign key is a link to another table in which that key is the primary key. The use of keys provides flexibility in updating and managing the information in individual tables.
SQL is a command language which enables databases to be created and updated, and information to be retrieved from them.
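A minimal sketch of how the pieces fit together, using simplified versions of the publishers and titles tables from the exercise (the column types are my own guesses for illustration):

CREATE TABLE publishers (
  pubid        INTEGER PRIMARY KEY,   -- unique identifier for each publisher
  company_name VARCHAR(100)
);

CREATE TABLE titles (
  isbn           VARCHAR(20) PRIMARY KEY,
  title          VARCHAR(200),
  year_published INTEGER,
  pubid          INTEGER,
  FOREIGN KEY (pubid) REFERENCES publishers(pubid)  -- links each title to its publisher
);

-- the shared pubid column is what lets a query join the two tables:
SELECT title, company_name
FROM publishers, titles
WHERE publishers.pubid = titles.pubid;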
As recommended in the "Resources" section, I started the practical exercise by learning the basics of SQL at www.sqlcourse.com. The course is very well designed for those, like me, who are encountering databases for the first time. I practised creating tables, inserting data, updating tables and producing queries.
Then it was time for serious stuff!
The first seven exercises were not easy (I made a lot of mistakes in syntax), but I understood pretty fast how SQL works.  
Then the fun started. From exercise number 8 onwards, I spent at least one hour on each query!
Finally,
I came up with productive queries.
8) The query       
SELECT company_name, year_published, title
FROM publishers, titles
WHERE year_published > 1990
  AND title LIKE '%programming%'
  AND publishers.pubid = titles.pubid;
yielded 954 rows from the database.
9) The following query 
 select name, ISBN from publishers,titles where isbn='0-0280074-8-4' and publishers.pubid=titles.pubid;
came back with  ‘Glencoe’ as the publisher’s name.
10) After sweating for another hour, I came up with a query which produced 108 rows! All I needed was the author’s name, one and only one, not 108 rows of information. Back to work. My joins were definitely wrong.
I spent another hour trying to understand how it works and I got it. Me! Conclusion: everything is possible - even the impossible.
My last and most precious query
mysql> SELECT title, title_author.isbn, authors.au_id, author
    -> FROM titles, title_author, authors
    -> WHERE title = "A beginner's Guide to Basic"
    ->   AND titles.isbn = title_author.isbn
    ->   AND title_author.au_id = authors.au_id;
returned a beautiful table:
+-----------------------------+---------------+-------+--------------------+
| title                       | isbn          | au_id | author             |
+-----------------------------+---------------+-------+--------------------+
| A Beginner's Guide to Basic | 0-0307445-1-2 |  3648 | Martin, Sherry J.  |
| A Beginner's Guide to Basic | 0-0307445-1-2 |  5027 | Parker, Charles S. |
+-----------------------------+---------------+-------+--------------------+
2 rows in set (0.02 sec)
There were actually two names instead of one.
This was hard! But I loved it.
 I had such a fabulous feeling of achievement.
References:
Tech Target Inc. (2006). Available at: http://www.whatis.com/ (Accessed: 10 October 2011).

Wednesday, 5 October 2011

DITA Session 2



Things are starting to get clearer with the second dip into DITA. As is well known to many (I was not among the enlightened before Monday morning), the Internet and the WWW are not the same thing, although the two terms are often used interchangeably.
As explained during the lecture, the WWW is a powerful and flexible service which operates on top of the Internet infrastructure.
The Internet divides computers into servers and clients. Clients send requests for information, interpret responses and display them in browsers. Server computers (much more powerful than clients) detect messages from clients, generate data which fulfil requests and send responses. At present the difference between server and client computers is fading, as the latter become more and more powerful.
Hypertext is natural-language text with linkages (hyperlinks).
HTML is a mark-up language which allows the use of hyperlinks.
Having no technical background doesn’t really help the learning process. However, it makes it more exciting!


Practical lab work.
The first task consisted of finding out the meaning of basic tags and the attributes that I can use with them.
  1. Paragraphs are marked with the <p> Paragraph </p> tag. This tag may have the following attributes: class (taking a class name as its value) and align (which may take the values left, right, center or justify).
  2. The line break <br/> produces a single line break; it is an empty tag, meaning there is no end tag. It can be used when writing lines within a limited space.
  3. Horizontal rules <hr> are used to divide page content into sections. Attributes: class, dir, id.
  4. Tables: <table>, with <tr> for a table row, <th> for a header cell (centred and bold by default) and <td> for a data cell (left-aligned by default).
          A table with two rows and two columns looks like this in HTML:
         <table border="1">
         <tr>
         <th>Month</th>
         <th>Number of students</th>
         </tr>
         <tr>
         <td>January</td>
         <td>1228</td>
         </tr>
         </table>
          Attributes: align (to align content in a cell), with the values left, right, center, justify or char.
  5. Meta information <meta name=""> contains meta elements such as author, keywords, description, date last modified and copyright. These elements are used to describe an HTML document to a browser (see the short sketch after this list).
  6. Unordered lists <ul> </ul>: bulleted or marked with other symbols.
  7. Ordered lists <ol type=""> <li></li> </ol>: numbered lists.
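A quick sketch of what these look like in practice (the values are just examples):

<meta name="author" content="A. Student">
<meta name="keywords" content="DITA, HTML, metadata">
<ul>
  <li>An unordered (bulleted) item</li>
</ul>
<ol type="1">
  <li>The first numbered item</li>
</ol>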
2.1 I've tried two HTML editors. One at home: Coffee Cup, a simple, user-friendly and well-organised editor. It was my first experience ever with HTML. After sweating for a couple of hours, I managed to create an extremely basic HTML document.
The second editor I tried was EditPlus in the lab. I created three different HTML files called first.html, index.html and myuni.html. If I understand correctly, these will be three pages of my personal website. The index.html is the main file, which works as a cover sheet for the other HTML documents. I also managed to link all three files to each other with anchor tags.
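The anchor tags themselves look something like this (the link text is my own choice):

<a href="index.html">Home</a>
<a href="first.html">First page</a>
<a href="myuni.html">My university</a>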


2.2 The following task was to apply CSS to the HTML files. CSS is used to separate content from presentation. Each style sheet contains rules that determine the appearance of elements in HTML documents (in simpler terms, how they will look in the browser). Each rule has a selector, i.e. the element of an HTML document which is affected (e.g. h1, p, body), and a declaration in curly braces, e.g. {color: red;}. A style sheet can be inserted into an HTML document directly with the <style> tag, or a link to a .css file can be placed within the <head> part of the document. Linking a style sheet to an HTML document makes the document smaller and faster for browsers to download. It should be remembered that an HTML document should remain presentable when opened without its style sheets.
I created my own style sheet, which I called myfirstcss.css:
p {font-size: 20px; }
p {text-indent: 20px;}
h1 {font-size: 12px; font-weight: bold; color: #ff0080; }

body {font-family: arial; background-color: gray;}
I haven’t linked it to my HTML files yet.
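When I do, it should only be a matter of adding a line like this inside the <head> of each HTML document:

<link rel="stylesheet" type="text/css" href="myfirstcss.css" />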


2.3 In the last exercise I had to publish my HTML documents, making them part of the WWW.
This was achieved by copying all the content of my session2 folder (HTML documents and images) to the City University web server. It happened so quickly that I am not sure how it really worked.
Reflections:
HTML is the language which defines how documents should be interpreted by a browser. Different browsers may interpret styles and mark-up elements in slightly different ways. CSS is extremely useful for applying a style to an HTML document, and an HTML document is easier to write and read when its style is defined by a link to a CSS file. CSS can either give a consistent style to all the HTML documents of a website or differentiate their styles. It is also possible to apply a specific CSS file through the browser settings, instructing the browser which style to use when displaying a web page. This function is useful when adapting web pages for visually impaired or colour-blind people.







Tuesday, 27 September 2011

First DITA session

We had our first DITA lecture and practical session in the lab.
The practical session involved creating documents in different formats and seeing how these documents can be accessed through different applications. I saw some metadata in HTML format (used for web pages) which, to me personally, looks scary at the moment. Hopefully it will make more sense in a few weeks' time.
The first exercise consisted of creating a text document with WordPad in ASCII format, the agreed encoding for alphanumeric characters. ASCII has some limitations: it uses only 7 bits per character, which limits it to 128 characters (the letter 'A', for instance, is stored as the number 65), and it encodes only the English (US) alphabet.

In the second task, after creating a document in Microsoft Word format, I tried to open it in Notepad. We got a window full of gibberish, because Notepad interprets the more complex binary encoding of the MS Word format as if it were plain ASCII text (there were some chunks of readable text, though).

The third exercise consisted of creating an HTML document and inserting an image into it. HTML is a language that uses alphanumeric characters, relies on ASCII and is used to mark text up with predetermined tags. When I viewed the HTML document in Notepad, it produced a page with lots of data, all of it readable, though not all of it yet interpretable and understandable. Much of it is metadata, which stands for data about data.
Conclusion: I have learned some basic concepts of how information is encoded, organised (a file-centred or document-centred view) and interpreted by operating systems, as well as some connections between file formats. Even though I use documents in different formats every day, I've never really thought about how they are created, deciphered and function.