GIS day (Late Post)

This post is late, but it is about the session that took place on GIS Day.

In the session we learned how to overlay existing maps onto satellite maps. The map used for the first session was a Soviet map of the UAE dated to around the 1970s.

The program we used is called Georef. Georef has the ability to create layers. The initial layer serves as a base template that shows the present-day area the old map will be compared against.

In our session we found that the current shape of Dubai/Sharjah is different from its previous shape, which had larger lakes and areas where water covered what is now land.

Another observation is how the map is oriented to show mostly the sea in comparison with the land. This indicates that the author of the map was more interested in the sea than the land. In the session we also had an opportunity to examine an old map stored in the archive. The map is part of a brochure that indicates the location of a bank's branch. I immediately recognised that the map dates to pre-union times, when Abu Dhabi was a sole state. The indication is in the flag: it is Abu Dhabi's pre-union state flag, and so the map shows a rough sketch of the city before the formation of the United Arab Emirates.

The map did not have any clear defining features such as roads, except for a small section of Abu Dhabi. There were no icons except for the main destination, which is the bank. The other potentially identifiable spaces were listed as numbers and defined in a key at the side.

The map reveals its intended audience: it is laid out to give directions from the airport to the bank. Moreover, the language is English only. The map offers directions to other spaces that might be of importance rather than mere identifiers. The hospital, the race course and the grand mosque seem a diverse set of spaces that would interest a visitor or tourist. This makes the theory that the brochure was distributed in the airport a valid one.


Palestinian Films (Finalised)

The second project is related to the book Dreams of a Nation by Hamid Dabashi. In one of the articles in it, Annemarie Jacir discusses the difficulty of distributing and creating Palestinian films. Due to the occupied nature of Palestine, there are policies that determine the narratives that can be distributed and the funding available for the creation of the films.

The data I collected in regard to the book is simplified into the relationship between the director and the film. I created a dataset with the following fields:

- The film.

- The year released.

- The director's nationality.

- The director's current residence.

- The film's location.

The dataset was created with the purpose of establishing relationships between film and creator in terms of location.
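As a sketch of what this looks like in practice, the five fields above can be written out as rows of a CSV, the format most visualisation tools accept. The titles and values below are placeholders, not entries from my actual table:

```python
import csv
from io import StringIO

# Placeholder rows illustrating the five fields of the dataset;
# these values are examples, not data from the real table.
rows = [
    {"film": "Film A", "year": 2005, "director_nationality": "Palestinian",
     "director_residence": "New York", "film_location": "Ramallah"},
    {"film": "Film B", "year": 2012, "director_nationality": "Palestinian",
     "director_residence": "Tel-Aviv", "film_location": "Nazareth"},
]

# Write the rows out as CSV so they can be loaded into a mapping tool.
buf = StringIO()
writer = csv.DictWriter(buf, fieldnames=rows[0].keys())
writer.writeheader()
writer.writerows(rows)
print(buf.getvalue())
```

Keeping each location as its own column is what later allows the residence-to-film-location relationships to be drawn as a network.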

For this data I also included the "coordinates" of a nationality, which is interesting because nationality is a concept of identity in relation to location. However, the location is not conclusive data with clear start and end parameters, as it has also been subject to wars. In any case, this is my dataset:

palestinian films 1 – local 2

The result was a network that indicated the relationship of the inputs to these films and the effect of migration on them. The yellow represents the film locations, while the pink marks the locations of the directors' current residences. The blue marks the locations of the nationalities, which are not as diverse or apparent due to the accumulation of film locations shown as yellow spots.

I was able to attain a somewhat clearer network, with the lighter shades representing the residences of the directors and the darker shades the film locations. The locations that show a simple network between two spaces reflect the relationship between one director's residence and one film's location. The most congested relations indicate several films with similar networks; Tel-Aviv and New York, being the most congested, indicate several directors residing there. However, the network also exposes my mistake: some of the film locations were simplified as "Palestine" or "Israel" instead of a specific city or area.

My initial goal for this project was to provide data that could be interpreted in various forms: migration, public policy and filmmaking, or the Palestinian diaspora and the redistribution of film content. However, I found it difficult to analyze the data myself, as some films did not have all the information I required.

I redid the same project with Nodegoat in the hope of a more detailed result. At first, the way I formatted the project could not provide me with any network results. Instead of having directors as the type, I chose films. For each film, I inserted sub-objects that are entirely linked to locations.

These were the location of the film, the location of the director, and the country of the director's nationality. Through that I managed to create a more detailed map; however, places given as "West Bank" in the data could not be listed because the area was not recognised by Nodegoat. For the West Bank I chose to list Gaza instead, as it relates to the Palestinian territories. However, this choice of replacing it with a different space shows how information can be misrepresented for the sake of visualisation.

The first image shows a zoomed view of the space, mostly blue, which represents the film locations, in comparison with the red and pink, which indicate the directors' nationalities and residences. My goal was to visualise the diasporic nature of the filmmakers, due to the policies of the space and the nature of their work. My project aims to question the reasons for the diaspora, and whether it was an effect of the films. Another question would be how the films are distributed and released: is there a way to track the movement of the films themselves in terms of distribution and censorship?

My project was really a failure, as I chose the wrong information to visualise. Instead of looking at the distribution of the films and their sources of funding, I looked at the directors' exact locations. It did show the diaspora, but it does not give any more information than that.

Moreover, I wonder if there is an element of violating private information, even though the data was found publicly. The nature of the directors' movements could be private and have unique factors that cannot be generalised according to film and public policies.

As the author trying to visualise networks and locations, I seem to have thought more about the visualisation than the data. I assumed that the presentation would hand me an analysis and questions, instead of carefully looking at the data and checking the compatibility of the visualisation with the information.

3D Project


For my final project I decided to create a 3D model. The model has two versions: the first is created purely from memory, and the second is an attempt to replicate photographs of the original. The model I chose is the famous Volcano Fountain, which is part of my childhood memories. I decided on a 3D model project because I was nostalgic for the space, but also wanted to test whether I could create an accurate replica in 3D. It is very difficult to fully mediate an image from memory onto something, and through this project I wanted to see if I could create a 3D model out of these memories.

 History of the model

The Volcano Fountain was an iconic fountain that became a trademark of the Corniche and of Abu Dhabi. It was built in 1980 and stood 80 feet high. It was structured like a circular pyramid, with sets of steps on each side providing access to the fountain at the top. It stood near the Corniche Maternity Hospital.

According to many news articles, the Volcano Fountain was the most common meeting place and recreational space for Abu Dhabi residents and visitors. The fountain no longer exists due to the redevelopment of the Corniche; at the spot there still remain traces of the circle where it used to be.

"After it was demolished in 2004 as part of a redevelopment of the Corniche, Menon was taking a walk in the area and had a flash. 'Why not identify a place and bring it back?'" (The National, 2013)


This project required SketchUp, a program that allows me to create things in 3D space. The project seemed very simple when I started by creating from memory only; the options were simple and uncomplicated. However, I think it seemed simple because I was relying on my memory. Through the options, it seems I chose what I wished to remember rather than what I needed to remember to create the replica. It is also because it was my childhood memory that the model came out different, as it was from a different perspective.

My idea of the base of the fountain was that it would be structured like a cake. Again, my perception of the Volcano Fountain was simplified.

It was simple to geolocate: as I mentioned before, there were already traces that signalled the Volcano Fountain's presence. The volcano of my memory was made of multiple layers of circles. The stairs were in different positions that suggested a maze, due to my memory of using them haphazardly like one. In my memory the fountain encompassed the whole artificial mountain and ended with a small pool at the top.

Imitating the Image.

Looking at images, I realised that many factors were missing from my memory. For example, the fountain itself was not as I remembered it; I had thought the fountain was the whole artificial mountain, as mentioned previously.

The second issue was that the different perspectives of the images could not give me access to the whole object. However, with video footage of the site I was able to see a complete version of the fountain.


The process was fun and made me think about recreating spaces in 3D. If we had no images and had to rely on secondary sources to develop the Volcano Fountain, how accurate could we be in depicting the space?

Archaeology in the digital humanities relies heavily on statistics, figures and sources that are either secondary or subject to incompleteness. If such data is used to recreate findings in virtual space or as a 3D object, then to what extent is the result information rather than assumption? Moreover, if the 3D object is based on the program's interpretation of the data, is it subject to a remediation of information?

My memory of the 3D object is mediated by the perspective and the tools that enabled me to structure the piece. By trying to replicate the images, I had to remediate the different perspectives of the 3D object to create a more definite piece. So the two attempts to recreate the fountain both remediate information, through different means.




Food network Project




In the Digital Humanities class we proposed to create a map that reveals a set of information about restaurants in Abu Dhabi. The intent of the Food Network Project is to find relationships between different restaurants in the context of categories such as average price and location.

The Process

For this post I will provide a detailed description of the process. To me, the process was more of a thesis question than the research itself.

The process can be divided into two parts: creating the dataset and visualising it. During the process I had to work around the visualisation programs rather than being content with using their interfaces directly. Carto Builder created a kind of barrier where I could no longer access the data directly, only use it indirectly.

Data Collection:

Before we could visualise the data, the first step was crowdsourcing the information. Through the Fulcrum app, each individual can enter data into predefined buckets tied to a location. The buckets are categorised as:

  • Name
  • Average price
  • Number of tables
  • Date established
  • Origin of the food and its subcategories
  • Whether they deliver to Saadiyat or not
  • A photograph of the restaurant
  • The geolocation
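A minimal sketch of one such record, using the buckets above as dictionary keys. The field names are my own shorthand for illustration, not Fulcrum's actual schema:

```python
# One crowdsourced restaurant record, keyed by the buckets above.
# Field names and values are illustrative, not Fulcrum's real schema.
record = {
    "name": "Example Cafeteria",
    "average_price": 25,              # AED
    "number_of_tables": 8,
    "date_established": 2010,
    "origin_food": "Levantine",
    "delivers_to_saadiyat": True,
    "photo": "example_cafeteria.jpg",
    "geolocation": (24.533, 54.435),  # latitude, longitude
}

# A quick completeness check: which buckets are missing or empty?
required = ["name", "average_price", "number_of_tables", "date_established",
            "origin_food", "delivers_to_saadiyat", "photo", "geolocation"]
missing = [k for k in required if record.get(k) in (None, "", [])]
print("missing buckets:", missing)
```

A check like this makes explicit which interview questions went unanswered, which matters later when incomplete records distort the visualisation.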

As a collective we decided to create the categories based on our own definitions of food genres, which was difficult, as classifications such as "cafe" and "food nationality" could not fully capture the data input. For example, I found stores that have identical menus but classify themselves as different ethnicities, and that include subcategories in their own descriptions that do not relate to their menu or space features.


Another issue was the date established. The answer could only be attained through direct contact with an experienced waiter or the owner, who were not always available during the interview process. It also raised the question of whether the date established referred to all of the branches or only to that particular location of the shop.

The categories made answers difficult to define. What was easy to define was numerical data such as delivery time and number of tables; this can be defined as discrete quantitative data (1). Questions that were not theory-based and related to the present status of the restaurant were more successful than the others.

I enjoyed this process the most, as it gave me direct interaction with the source of the data. Instead of relying on a second-hand source that could have been edited, I had complete control over what information entered the dataset. This does not save it from being incomplete or unedited: factors such as transportation and time limits constrained my ability to collect information from certain spaces, along with language barriers. The parameters of the information were set only by my mobility rather than by access through a second source.

Visualisation Process:

The second part was visualising the data, which was simple in theory only. It required exporting the dataset from Fulcrum to Carto. Carto immediately visualises the information as a map after loading the data in tabulated format. The difficult part was trying to create different layers and be specific in my visualisation without using SQL. Working only with the options Carto provides is a problem, as there is a barrier that stops me from interpreting the information directly and requires me to rely on Carto's interpretation of the same dataset.

Moreover, while working with Carto I realised that the information is placed in layers that I must reorder. The layers imply a hierarchy of information I have to determine for the viewer. I am currently working on allowing viewers to control the information rather than controlling it for them. Carto Builder provides widgets that enable me to analyse and create relations between the different restaurants. It was a matter of expectations: I had imagined information being accessible, not necessarily in raw format, but in a way where the viewer can access all of it simultaneously. My expectation that the data would be fully realised by Carto Builder made me question my extreme faith in an application that could supposedly do everything without requiring my commands. The upsetting matter is that Carto decided which options I need, instead of giving me the ability to create my own, which I assume can be managed through SQL.

What freaked me out the most was the HTML code and the inability to drag and drop Carto's map onto my blog post like an image or video. At this stage I had to coordinate different elements to display the information by working with HTML code. The invisibility of these codes made me assume there was a universal language, and that the map could easily become part of the WordPress site along with several iframes, which was not the case. WordPress does not hold this information automatically; it required me to add a widget. I was surprised that even transferring the map from Carto to this blog post required another level of coordination.

I intended to detail the steps of visualising the information as though we are watching this blog post dress itself, because it is important that we talk about these different steps. The mediation and rendition of information through different applications and interfaces is an integral part of reinterpreting data that started with a question to a waiter.

When I see other posts that are minimal and display only the research information, there is the misconception that most of the effort lies in collecting and analysing the dataset, not in coordinating and visualising the information. The process of visualisation is important, as it dictates the perception of the information.


The Information and the Analysis

My final product in the end was a simple map using Carto's analysis tools. Through the tools I decided to consider four factors: the number of tables, the average price, delivery times and the year established. Through these factors I wanted to test whether prices are directly proportional to the number of tables in a restaurant.


(Still working on creating an Iframe for the Map)

Blog Post : Node Goat

Nodegoat appears to create networks based on connections that have already been pre-identified, unlike the previous programs we worked with, where the connection is the result. In our previous projects, data was inserted with the intention that the program would identify the correlations. In Nodegoat, however, the correlations should already be in mind when creating the networks.

To create a network, categories must be predetermined, which can create issues when the data does not fit within a category. The data may have to be discarded if it is not relevant to the network, which highlights how selective the network is.
For the Egyptian films project, the main focus is the film and the people who participated in its creation. Through this network we identify the directors and actors who have worked on several films. For the project, it was decided that the main actors and the director were the important elements of the networks.
However, it proved difficult to identify the different works as separate entities. The social visualization became messy as people and films started to mix. There are situations where actors and directors have social relations, such as marriage and other familial ties, but the network recognizes films as entities in the same way as persons. For the project, the film became a social being, and that created confusion.
Moreover, there was the issue of the missing visualization of time. People and films that have no correlation do not explicitly signify whether a time difference is the reason. Through this project came the realization that it is crucial to identify what type of information will be visualized.
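The mixing of people and films can be avoided by keeping the node types apart in the data itself. A minimal sketch, with invented names rather than entries from the actual project, and plain Python rather than Nodegoat:

```python
# Minimal typed network: people and films are separate node sets,
# and each edge records a role. All names here are invented examples.
people = {"Director A", "Actor B", "Actor C"}
films = {"Film X", "Film Y"}

edges = [
    ("Director A", "Film X", "directed"),
    ("Actor B", "Film X", "acted"),
    ("Actor B", "Film Y", "acted"),
    ("Actor C", "Film Y", "acted"),
]

def connectors(film1, film2):
    """People who appear in both films -- unambiguous, because a
    film can never be mistaken for a person in this structure."""
    cast = lambda f: {p for p, f_, _ in edges if f_ == f}
    return cast(film1) & cast(film2)

print(connectors("Film X", "Film Y"))
```

Because the types are explicit, a social relation like a marriage would get its own person-to-person edge instead of being forced through a film node.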

The image of Sharia


The initial Corpus:


Sharia law is a legal system based on the Quran, the Hadith and the Sunna: the teachings and ways of the Prophet. As a legal system it has been painted as rigid and inflexible, because the law is based on religious text. However, most of the rulings come from a practice of Ijtihad that determines cases along different levels of acceptability and unacceptability.

The terms used are:

Fard فرض: obligatory

Mustahabb مستحب: recommended

Mubah مباح: neutral

Makruh مكروه: discouraged/disfavoured

Haram حرام: forbidden

Using these key terms, I wish to create a textual analysis to see their frequency of appearance within the text, and then see whether other recurring terms give insight into the context of that frequency.

My aim is to show that Sharia is not as rigid as many perceive it to be. Moreover, I wished to make it possible to go through texts and compare the different schools of thought on a specific case.

My initial corpus was too broad in its sources, and I had difficulty compiling the different opinions about one case. Moreover, the example text I wished to start with needed to be digitised, which required more time than I could commit.

Current Corpus:

The current corpus still involves Sharia, but instead of using the actual source texts to tackle perceptions, I used different online news articles that refer to them.

The three articles used for the corpus are :

All three articles were found by googling the phrase "Sharia law". Some of the articles are dated, and some never even use the phrase "Sharia law". All the articles tie their topic to a specific location, and the locations are all described as Muslim countries. After finding the articles, I used Voyant Tools to create a textual analysis.
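Voyant's frequency counts can be reproduced on any plain-text article with a few lines of Python. The sentence below is a stand-in for a real article body:

```python
import re
from collections import Counter

# Stand-in text; in practice this would be the full article body.
text = ("Sharia is the basis of the legal system. The law draws on Sharia, "
        "and the legal code of the UAE incorporates Sharia alongside civil law.")

# Lowercase, split on non-letters, drop very short stopword-like tokens.
words = re.findall(r"[a-z]+", text.lower())
freq = Counter(w for w in words if len(w) > 3)
print(freq.most_common(3))
```

Running this over each of the three articles would give the raw counts behind the word bubbles discussed below.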

Defining Sharia’s role in the UAE’s legal foundation.

The first article is based in the UAE, written by Diane Hamid, a lawyer based in Dubai. The article directly addresses the legal system of the UAE and how Sharia is incorporated into it.

Voyant gives the results in a graphic format. The most frequent words are expressed through "bubble lines" that indicate their presence across the different parts of the article.

Most frequent words in the corpus: law (18); sharia (17); uae (14); islamic (12); legal (7)


In this article, the term Sharia is used incrementally throughout, in comparison with Law, which decreases as the article comes to a conclusion.


The visual presentation of the text takes away the context of the article and reveals the relationships between the different terms. Law and Sharia are inversely proportional: whenever Sharia is mentioned, the terminology of law is not present at the same frequency. The same can be said for the terms Legal and Islamic. Sharia and Islamic seem to run parallel for the most part, as do UAE, Law and Legal. This suggests that the author may be creating a dichotomy between the two in relation to the legal system of the UAE.

The second article is written by Jon Boone for the Guardian and is filed under the category labeled Pakistan. The article is more specific, as it is about a custom, honour killings, that has been criminalised by the legal system in Pakistan.

by Jon Boone  (2016)

The frequent words in the article: honour (7); law (7); killings (5); new (5); passed (5); sharia



The relations between the words indicate that Law and Honour Killings do not correlate with each other. Moreover, the word Sharia is not very apparent in the article at all. Through Voyant we see that Sharia is barely mentioned in reference to the new law or to the nature of the custom being punished.

The last article is a New York Times piece about a Saudi moral enforcer facing death threats after stating his opinion on liberal Islam.

by Ben Hubbard (2016)


The last article has no relation to Sharia law, as indicated by the word bubble above. However, through the search tools, this article appeared relevant when "Sharia law" was typed. This indicates that the metadata of the article may be more relevant to the search than the article itself.

The article's most frequent word, as indicated above, happens to be "said". This could indicate that the writer's sources for this information are people's opinions more than facts.




Blog post 3

Most of the digital humanities projects we looked at are concerned with transcribing, translating and presenting information. The first step, digitising the object, requires transcribing it, whether it is a text or a 3D image. The second step is translating information that is foreign to the digital realm, such as symbols and fonts that have fallen out of use, or images that require the most suitable format so as not to lose the prioritised information. Translating from the previous object requires a lot of editing, in which some information is prioritised over other information within the object of interest. The translated information must then be made accessible to the user, and presenting the object in a digital format requires curation. For example, the text we used in Typewright had already been OCR-ed, but it required our intervention to ensure that the program reads the information the way it is understood. The text was old and had characters that had to be translated because they did not exist on the computer. Even if not intended, there is a hierarchy of information, where the author arranges the data in the way they think will make it easier for the user to understand in terms of the context they already know.

Blog post 2


A project that I think could generate an initiative locally at NYUAD and in the UAE would be transcribing the poetry of the Gulf region through crowd input. Poetry is a marker of events, both national and personal. The collection could become an important part of interpreting the narratives of the region, and it would give everyone access to the poetry for educational or entertainment purposes. I imagine the crowdsourcing audience would range from people academically interested in the poetry to fans and relatives of the poets. Since poetry is performed, the project could begin with text and branch out into audio and video. The videos could form another collection of different performers of the same poems, allowing analysis in terms of dialect and language.

Another project I am interested in, though it may not be academic, is collecting Emirati novels that have circulated in forums. Through a friend I discovered there is a large audience for these novels, and I assume some of them have been turned into books. However, it is not certain whether those that were published were plagiarized or not. These collections circulated between 2004 and 2012. To me this collection is an important reflection of the period in this region, and if possible I wish to collect them as a side project.

Blog 1- Remediation

The Remediation process

The most difficult thing in Arabic is the grammar rules, النحو. They are the most stressful part for any student learning Arabic: vast, varied and complicated. The grammar is stressed because it is important, but in the digital realm it is no longer visible or considered. During the last class, we had an opportunity to "remediate" several texts in different languages.

The remediation process allows the computer to read a scanned text and provide us with a digital version. However, when reading Arabic, it kept having difficulty recognising letters that vary in form and that have spaces or are not connected. Arabic can be published in various styles of handwriting, and these are not recognisable to the program since it has no memory of them.

The errors in the process of remediating or digitising Arabic text make it clear that the recognition process relies on memory. For digitisation to happen, there must be an existing dictionary; Arabic letters in a different font will not be recognised if there is no memory of them.
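This dependence on memory can be illustrated with a toy sketch: recognition is just a lookup in a dictionary of known glyph shapes, so a letter rendered in an unseen style simply fails. The "shape" strings here are invented stand-ins for real glyph features, not how actual OCR engines encode them:

```python
# Toy "OCR memory": a dictionary mapping known glyph shapes to letters.
# The shape keys are invented stand-ins for real glyph features.
memory = {
    "shape-ba-naskh": "ب",
    "shape-ta-naskh": "ت",
    "shape-alif-naskh": "ا",
}

def recognise(shape):
    # A glyph is only readable if its shape already exists in memory.
    return memory.get(shape, "?")

print(recognise("shape-ba-naskh"))  # a style the program has seen before
print(recognise("shape-ba-ruqah"))  # the same letter in an unseen style fails
```

Real OCR generalises far better than a lookup table, but the principle holds: without training data covering a script's many forms, recognition falls back to "?".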

This process makes me wonder how Arabic exists as a digital format. Apparently the formats are either typed out by a person or stored as PDF or image files. There has not been much activity in digitising Arabic text, so the dictionary and the memory the reader has are extremely small compared with other languages.

Blog 0


Accessibility and Data
The improvement of technology has allowed for swifter communication between individuals and even between institutions and organisations. Technology allows entities such as libraries and cultural archives to be available to everyone. For this small post I will be referring to data that is not intended for commercial or economic gain. The information that interests me is archival: books, music and films that exist as physical entities and have then been digitised.
The transfer allows the user to interact with the information and read it differently, creating new data altogether. The rhythms of a song can be visualised, mapped and even formed into a 3D image. With accessibility for all users, the information can be translated into different languages and even form a language of its own. The storage and display of information in digital form is efficient, reliable and cheaper.


An example is the digital project called Digital Karnak: the Karnak Temple in virtual form. It exists virtually as a space that can be exhibited across time as a geographical location. The project was created by UCLA's digital institute and funded by the National Endowment for the Humanities. The site provides a mix of archives, videos and a timeline of the temple that allows the user to explore its history. Using different formats to present the information, the user "experiences" the temple.

The home page of the site contains sub-pages: Time Map, Experience Karnak, Browse Archive and Google Earth. The Time Map allows the user to visualise the growth of the temple as a location presented in Google Maps; by adjusting the timeline, different stages of the temple are shown. The Experience Karnak page contains written files (PDFs) and videos detailing the rituals and festivities conducted within the temple. Google Earth gives the user the ability to download the time map and view the information in the Google Earth application. Digital Karnak is expressed in periodical, geographical, visual and written formats. As a digital format, the information has a different relationship with the user: it takes on a more fluid form that can be seen from different perspectives rather than one.