Glossary Items

H

  1. An open-source software framework for distributed storage and distributed processing of very large data sets. An application can be broken down into numerous small parts, called fragments or blocks, that can be run on any node in the cluster.
    Hadoop is free and part of the Apache project, sponsored by the Apache Software Foundation.
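The fragment-based processing model described above is commonly illustrated with MapReduce-style word counting. The sketch below is plain Python, not Hadoop's actual Java API: a map step processes each input fragment independently (as individual cluster nodes would) and a reduce step merges the partial results.

```python
# Toy illustration of the MapReduce model Hadoop popularized: input is
# split into fragments, each fragment is processed independently, and a
# reduce step merges results. Plain Python, not Hadoop's real API.
from collections import Counter

def map_fragment(fragment):
    # Each "node" counts words in its own fragment.
    return Counter(fragment.split())

def reduce_counts(partials):
    # Merge the partial counts produced by all nodes.
    total = Counter()
    for partial in partials:
        total.update(partial)
    return total

fragments = ["the quick brown fox", "the lazy dog", "the fox"]
word_counts = reduce_counts(map_fragment(f) for f in fragments)
print(word_counts["the"])  # 3
print(word_counts["fox"])  # 2
```

Because each fragment is processed independently, the map calls could run on any node in a cluster; only the merged counts need to be brought together.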
  2. The running of Hadoop in the Cloud requires no local hardware or IT infrastructure. The service is typically elastic, allowing the adding or removal of nodes depending on user needs.
  3. The Hadoop Distributed File System (HDFS) is designed to store very large data sets reliably, and to stream those data sets at high bandwidth to user applications.
    In a large cluster, thousands of servers both host directly attached storage and execute user application tasks. By distributing storage and computation across many servers, the resource can grow with demand while remaining economical at every size. HDFS stores metadata on a dedicated server, called the NameNode. Application data are stored on other servers called DataNodes. All servers are fully connected and communicate with each other using TCP-based protocols. Unlike Lustre and PVFS, the DataNodes in HDFS do not rely on data protection mechanisms such as RAID to make the data durable. Instead, like GFS, the file content is replicated on multiple DataNodes for reliability. While ensuring data durability, this strategy has the added advantage that data transfer bandwidth is multiplied, and there are more opportunities for locating computation near the needed data.
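The NameNode/DataNode split and block replication can be sketched in a few lines. This is a toy illustration only, not real HDFS code: a "NameNode" dict holds just metadata (which nodes hold each block), "DataNode" dicts hold block contents, and each block is replicated so losing one node loses no data. Block size and placement policy here are simplified assumptions.

```python
# Toy sketch of HDFS-style storage (an illustration, not real HDFS).
BLOCK_SIZE = 8     # bytes per block (real HDFS defaults to 128 MB)
REPLICATION = 3    # replicas per block (the HDFS default)

datanodes = {i: {} for i in range(5)}  # node id -> {block id: bytes}
namenode = {}                          # file -> list of (block id, [node ids])

def put(name, data):
    blocks = [data[i:i + BLOCK_SIZE] for i in range(0, len(data), BLOCK_SIZE)]
    namenode[name] = []
    for seq, block in enumerate(blocks):
        block_id = f"{name}#{seq}"
        # Place replicas on REPLICATION distinct nodes (round-robin here;
        # real HDFS placement is rack-aware).
        targets = [(seq + r) % len(datanodes) for r in range(REPLICATION)]
        for node in targets:
            datanodes[node][block_id] = block
        namenode[name].append((block_id, targets))

def get(name):
    out = b""
    for block_id, targets in namenode[name]:
        # Read each block from the first replica that still has it.
        for node in targets:
            if block_id in datanodes[node]:
                out += datanodes[node][block_id]
                break
    return out

put("log.txt", b"hello hdfs, replicated!")
datanodes[0].clear()        # simulate losing an entire DataNode
print(get("log.txt"))       # data survives via the remaining replicas
```

The same metadata also tells a scheduler which nodes hold a given block, which is what lets Hadoop move computation to the data.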
  4. Also referred to as Haptics or “touch feedback,” haptic technology applies tactile sensations to human interactions with machines. The simplest example is the actuator that vibrates a cell phone, but more advanced haptics can detect the pressure applied to a sensor, affecting the response.
  5. A hard fork (or hardfork), as it relates to blockchain technology, is a radical change to a network's protocol that makes previously invalid blocks and transactions valid, or vice-versa.
    When a blockchain is forked, there is a permanent divergence from the previous version of that blockchain, and nodes running previous versions will no longer be accepted by the newest version. This essentially creates two paths in the blockchain: one follows the new, upgraded blockchain, while the other continues along the old path.
  6. A head-mounted display is a display device, worn on the head or as part of a helmet, that has a small display optic in front of one eye (monocular HMD) or each eye (binocular HMD).
    A head-mounted display (HMD) is a type of computer display device or monitor that, as the name implies, is worn on the head or is built in as part of a helmet. This type of display is meant for total immersion of the user in whatever experience the display is designed for, as it ensures that no matter where the user's head may turn, the display is positioned right in front of the user's eyes.
  7. A term coined by design academics Anthony Dunne and Fiona Raby, Hertzian Space refers to the hidden electromagnetic environment generated by the increasing number of wireless devices.
    Hertzian space is a term used to describe a holistic view of the electronic device and its cultural interactions. Anthony Dunne and Fiona Raby described this "electro-climate," inhabited by humans and electronic machines, as the interface between electromagnetic waves and human experiences. It has been defined by Anthony Dunne as the architecture of the physical interactivity between a device and a person. Everything that requires electricity gives off an electromagnetic field that extends infinitely into space. Visible light is part of Hertzian space, as are radio, medical X-rays, television signals and UV tanning lamps. While we only see the discrete object, there is in fact an entire wave-field emanating from the object. Dunne and Raby believe that increased awareness of Hertzian space will assist our design practices. They think that we are only beginning to understand the effects and consequences of technological advances, and that "it is an environment that must be fully understood if it is to be made habitable". By thinking about technologies in terms of Hertzian space, we gain a more holistic understanding of technology that goes beyond the merely visible technological object and encompasses the practices, economics, and ideologies that become encoded into technological artifacts.
  8. A heterogeneous network (HetNet) is a network connecting computers and other devices with different operating systems and/or protocols. HetNets allow mobile operators to better utilize their data networks’ capacity.
    A heterogeneous network is a network connecting computers and other devices with different operating systems and/or protocols. For example, local area networks (LANs) that connect Microsoft Windows and Linux based personal computers with Apple Macintosh computers are heterogeneous.
  9. High-Speed Downlink Packet Access (HSDPA) is an enhanced 3G mobile-telephony communications protocol in the High-Speed Packet Access family, also dubbed 3.5G, 3G+, or Turbo 3G, which allows networks based on Universal Mobile Telecommunications System to have higher data speeds and capacity.
    High-Speed Downlink Packet Access is an enhanced 3G mobile-telephony communications protocol in the High-Speed Packet Access (HSPA) family, also dubbed 3.5G, 3G+, or Turbo 3G, which allows networks based on the Universal Mobile Telecommunications System (UMTS) to have higher data speeds and capacity. As of 2013, HSDPA deployments can support downlink speeds of up to 99.3 Mbit/s. HSPA+ offers further speed increases, providing speeds of up to 337.5 Mbit/s with Release 11 of the 3GPP standards.
  10. High Speed Packet Access (HSPA) is a wireless access technology designed for increasing the capacity of Internet connectivity from 3G mobile terminals.
    High Speed Packet Access (HSPA) is an amalgamation of two mobile telephony protocols, High Speed Downlink Packet Access (HSDPA) and High Speed Uplink Packet Access (HSUPA), that extends and improves the performance of existing 3G mobile telecommunication networks utilizing the WCDMA protocols. A further improved 3GPP standard, Evolved HSPA (also known as HSPA+), was released late in 2008 with subsequent worldwide adoption beginning in 2010. The newer standard allows bit-rates to reach as high as 337 Mbit/s in the downlink and 34 Mbit/s in the uplink. However, these speeds are rarely achieved in practice. The first HSPA specifications supported increased peak data rates of up to 14 Mbit/s in the downlink and 5.76 Mbit/s in the uplink. It also reduced latency and provided up to five times more system capacity in the downlink and up to twice as much system capacity in the uplink compared with original WCDMA protocol.
  11. An improvement made to UMTS to enable faster uploading of data from devices, increasing capacity and throughput while reducing delay. The specifications for HSUPA are included in the Universal Mobile Telecommunications System Release 6 standard published by 3GPP.
    The technical purpose of the Enhanced Uplink feature is to improve the performance of uplink dedicated transport channels, i.e. to increase capacity and throughput and reduce delay.
  12. Homomorphic encryption is a form of encryption that allows computations to be carried out on ciphertext, thus generating an encrypted result which, when decrypted, matches the result of operations performed on the plaintext.
    Homomorphic encryption is a method of performing calculations on encrypted information without decrypting it first. This is sometimes a desirable feature in modern communication system architectures. Homomorphic encryption would allow the chaining together of different services without exposing the data to each of those services. For example, a chain of different services from different companies could calculate 1) the tax 2) the currency exchange rate 3) shipping, on a transaction without exposing the unencrypted data to each of those services.
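A concrete illustration of computing on ciphertext is the multiplicative homomorphism of unpadded ("textbook") RSA: the product of two ciphertexts decrypts to the product of the two plaintexts. The tiny key below (p=61, q=53) is for illustration only; real systems use much larger keys and purpose-built schemes such as Paillier or BFV.

```python
# Textbook RSA is multiplicatively homomorphic: multiplying ciphertexts
# corresponds to multiplying the underlying plaintexts. Toy key for
# illustration only -- never use unpadded RSA or keys this small.
n, e, d = 3233, 17, 2753     # n = 61 * 53; e*d = 1 (mod lcm(60, 52))

def encrypt(m):
    return pow(m, e, n)

def decrypt(c):
    return pow(c, d, n)

a, b = 7, 3
# The multiplication happens on ciphertexts only; the plaintexts
# are never exposed to whoever performs this step.
product_cipher = (encrypt(a) * encrypt(b)) % n
print(decrypt(product_cipher))   # 21 == 7 * 3
```

This is why, in the service-chaining example above, each service could contribute its computation without ever seeing the unencrypted transaction data. Fully homomorphic schemes extend the idea to both addition and multiplication, and hence to arbitrary computations.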
  13. A hotspot (Wi-Fi) is a physical location where people may access the Internet, typically using Wi-Fi technology, via a wireless local-area network (WLAN) using a router connected to an Internet service provider.
  14. Human augmentation refers to digital technologies that enhance a person’s cognitive, productivity, or physical capabilities. An example is using active control systems to create limb prosthetics with characteristics that can exceed the highest natural human performance.
  15. Another way of expressing the Quantified Self concept, the Human Internet of Things (HIoT) refers to the collection and optimization of physiological data from sensors applied to humans, generally with wearable tech.
    The Internet of Humans refers to interactions between humans and connected digital systems. This can involve both direct user input (e.g. by controlling a digitally connected product) and indirect human monitoring (e.g. by using wearables and a Quantified Self application).
  16. Human-computer interaction (HCI) is the study of how people interact with computers and to what extent computers are or are not developed for successful interaction with human beings.
    A significant number of major corporations and academic institutions now study HCI. Historically and with some exceptions, computer system developers have not paid much attention to computer ease-of-use. Many computer users today would argue that computer makers are still not paying enough attention to making their products "user-friendly." However, computer system developers might argue that computers are extremely complex products to design and make and that the demand for the services that computers can provide has always outdriven the demand for ease-of-use. One important HCI factor is that different users form different conceptions or mental models about their interactions and have different ways of learning and keeping knowledge and skills (different "cognitive styles" as in, for example, "left-brained" and "right-brained" people). In addition, cultural and national differences play a part. Another consideration in studying or designing HCI is that user interface technology changes rapidly, offering new interaction possibilities to which previous research findings may not apply. Finally, user preferences change as they gradually master new interfaces.
  17. A user interface consisting of hardware and software that lets a person send requests or commands to a machine. Typically, HMIs are designed to make it as easy as possible for a person to control a machine.
    A great example is the smartphone, with which a user performs various actions in order to navigate to the phone-call application and place a call.
  18. Hyperautomation involves the application of advanced technologies, including AI and machine learning, to increasingly automate processes and augment humans. Hyperautomation extends across a range of tools that can be automated, but also refers to the sophistication of the automation.
  19. Hyperscale computing is a distributed computing environment in which the volume of data and the demand for certain types of workloads can increase exponentially yet still be accommodated quickly in a cost-effective manner.
    Hyperscale computing is often associated with cloud computing and the very large data centers owned by Facebook, Google and Amazon. There is a lot of interest in hyperscale computing right now because the open source software that such organizations have developed to run their data centers is expected to trickle down to smaller organizations, helping them to become more efficient, use less power and respond quickly to their own users’ needs.
  20. An application protocol for distributed, collaborative, hypermedia information systems. Hypertext Transfer Protocol (HTTP) is the foundation of data communication for the World Wide Web.
    HTTP is the foundation of data communication for the World Wide Web. Hypertext is structured text that uses logical links (hyperlinks) between nodes containing text. HTTP is the protocol to exchange or transfer hypertext. The standards development of HTTP was coordinated by the Internet Engineering Task Force (IETF) and the World Wide Web Consortium (W3C), culminating in the publication of a series of Requests for Comments (RFCs). The first definition of HTTP/1.1, the version of HTTP in common use, occurred in RFC 2068 in 1997; this was obsoleted by RFC 2616 in 1999.
    HTTP functions as a request-response protocol in the client-server computing model. A web browser, for example, may be the client and an application running on a computer hosting a web site may be the server. The client submits an HTTP request message to the server. The server, which provides resources such as HTML files and other content, or performs other functions on behalf of the client, returns a response message to the client. The response contains completion status information about the request and may also contain requested content in its message body. A web browser is an example of a user agent (UA). Other types of user agent include the indexing software used by search providers (web crawlers), voice browsers, mobile apps, and other software that accesses, consumes, or displays web content.
    HTTP is designed to permit intermediate network elements to improve or enable communications between clients and servers. High-traffic websites often benefit from web cache servers that deliver content on behalf of upstream servers to improve response time. Web browsers cache previously accessed web resources and reuse them when possible to reduce network traffic. HTTP proxy servers at private network boundaries can facilitate communication for clients without a globally routable address, by relaying messages with external servers.
    HTTP is an application layer protocol designed within the framework of the Internet Protocol Suite. Its definition presumes an underlying and reliable transport layer protocol, and Transmission Control Protocol (TCP) is commonly used. However, HTTP can use unreliable protocols such as the User Datagram Protocol (UDP), for example in the Simple Service Discovery Protocol (SSDP). HTTP resources are identified and located on the network by uniform resource locators (URLs), using the uniform resource identifier (URI) schemes http and https. URIs and hyperlinks in Hypertext Markup Language (HTML) documents form inter-linked hypertext documents.
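The request-response exchange described above can be demonstrated end to end with only the Python standard library: a throwaway server on localhost plays the origin-server role, and http.client plays the user agent. The handler, path, and body below are arbitrary choices for the demonstration.

```python
# Minimal HTTP request-response demo over TCP on localhost, using only
# the standard library. The server sends a status line, headers, and a
# message body; the client submits a GET request and reads the response.
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from http.client import HTTPConnection

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = b"<h1>hello</h1>"
        self.send_response(200)                        # status line: 200 OK
        self.send_header("Content-Type", "text/html")  # response headers
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)                         # message body

    def log_message(self, *args):                      # silence console noise
        pass

server = HTTPServer(("127.0.0.1", 0), Handler)         # port 0 = any free port
threading.Thread(target=server.serve_forever, daemon=True).start()

conn = HTTPConnection("127.0.0.1", server.server_port)
conn.request("GET", "/index.html")                     # HTTP request message
resp = conn.getresponse()                              # HTTP response message
body_text = resp.read().decode()
print(resp.status, resp.reason)                        # 200 OK
print(body_text)                                       # <h1>hello</h1>
server.shutdown()
```

A cache or proxy, as described above, would sit between these two endpoints and either answer the request from stored responses or relay it onward unchanged.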