Sunday, November 29, 2009

how search engines work

It is the search engines that finally bring your website to the notice of prospective customers, so it pays to know how these search engines actually work and how they present information to the customer initiating a search. There are basically two types of search engines. The first uses robots called crawlers or spiders to index websites. When you submit your website pages to a search engine by completing its required submission page, the search engine spider will index your entire site. A 'spider' is an automated program run by the search engine system. It visits a web site, reads the content on the actual pages and the site's meta tags, and also follows the links that the site connects to. The spider then returns all that information to a central depository, where the data is indexed. It will visit each link you have on your website and index those sites as well. Some spiders will only index a certain number of pages on your site, so don't create a site with 500 pages!

The spider periodically returns to the sites to check for any information that has changed; how often this happens is determined by the moderators of the search engine. A spider is almost like a book: it contains the table of contents, the actual content, and the links and references for all the websites it finds during its search, and it may index up to a million pages a day. Examples: Excite, Lycos, AltaVista and Google.

When you ask a search engine to locate information, it is actually searching through the index it has created, not the Web itself. Different search engines produce different rankings because not every search engine uses the same algorithm to search through the indices. One of the things a search engine algorithm scans for is the frequency and location of keywords on a web page, but it can also detect artificial keyword stuffing, or spamdexing.
Then the algorithms analyze the way pages link to other pages on the Web. By checking how pages link to each other, an engine can determine both what a page is about and whether the keywords of the linked pages are similar to the keywords on the original page.
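To make that "searching through the index" step concrete, here is a toy Python sketch of an inverted index, the data structure an engine queries instead of the live Web. The pages, URLs and scoring rule are all made up for illustration; real engines use far more signals than raw keyword frequency.

```python
from collections import defaultdict

# Tiny corpus standing in for crawled pages (URLs are hypothetical).
pages = {
    "site-a.example/home": "search engines use spiders to index websites",
    "site-b.example/faq":  "a spider visits a site and follows its links",
    "site-c.example/blog": "engines rank pages by keyword frequency and links",
}

# Build an inverted index: word -> {url: occurrence count}.
index = defaultdict(dict)
for url, text in pages.items():
    for word in text.split():
        index[word][url] = index[word].get(url, 0) + 1

def search(query):
    """Rank pages by total frequency of the query words."""
    scores = defaultdict(int)
    for word in query.split():
        for url, count in index.get(word, {}).items():
            scores[url] += count
    return sorted(scores, key=scores.get, reverse=True)

print(search("spider links"))   # ['site-b.example/faq', 'site-c.example/blog']
```

Note that the query never touches `pages` directly, only `index`, which is exactly why an engine can answer in milliseconds while the crawl itself runs continuously in the background.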

Wednesday, November 25, 2009

on-line novel? how can that make money??


www.qidian.com is a Chinese on-line reading site under the Shanda Group, which also owns two similar novel websites, www.jjwxc.net and www.hongxiu.com. Shanda Group owns the copyright of novels totalling 40 billion Chinese characters, and the novels are updated by a total of 50 million Chinese characters daily. Shanda Group has 36 million registered users, and 30% actually come from foreign countries. Among these three websites, qidian.com is the most successful: it gets 400 million clicks daily, and many of its book titles became top search keywords on baidu.com or google.com.cn. The novels cover a wide range, from science fiction to magical fiction, from kungfu novels to history novels, from love stories to war legends. The imagination of the 800,000 contracted writers is unbelievable. Many of the novels have been published by publishers from mainland China to Hong Kong and Taiwan, some have been made into MMORPG games, and a few are being filmed. Here is a summary of some of the novels that have been made into games (it is in Chinese): http://www.5172game.com/news/yxzx/2009/0811/490.html
These are two games which were originally novels from qidian.com.
You may wonder how they make money out of it, since the novel text can be easily copied and pasted. First, they have made payment very easy. Shanda Group runs several MMORPGs in China, and gamers can easily purchase top-up cards from net-cafes or IT-related stores, which are all over the place. Readers can use the same top-up cards to top up their accounts, and when they log in, money is deducted automatically. Each chapter costs about 50 cents RMB, which is quite cheap, so private websites are not very attractive.
In order to improve the quality of the novels, qidian.com has a monthly token policy. Every paid user is given some tokens each month, and if they feel a book is good and want to support the author, they can give their tokens to the book. The authors of the top ten books are rewarded with some cash, and the website sends good authors to special training classes given by the Association of Chinese Writers, the national professional author board.

Some of the most popular novels are published in print: Shanda either sells the copyright for some money or publishes the novel through its own press. It is a similar case for novels made into games; as Shanda owns several MMORPGs and has several large game development teams, it can choose to develop a game on its own or sell the rights to other game makers.

With the development of 3G technology, qidian.com now has a service for mobile users, who can pay through their mobile service provider (e.g. China Mobile) and read the novels on their handphones. This allows people to read novels on the subway or bus. Recent news says that Shanda has signed a contract with iTunes to give Chinese iPhone users another payment method, and this service may also become available to overseas users.

Qidian.com has been awarded many titles, such as the 2008 "annual best literature website" and 2008 "Forbes China New Media".

Sunday, November 22, 2009

facebook

Mark Zuckerberg invented Facemash on October 28, 2003 while attending Harvard as a sophomore. The site represented a Harvard University version of Hot or Not, according to the Harvard Crimson. That night, Zuckerberg was blogging about a girl who had dumped him and trying to think of something to do to get her off his mind:
I'm a little intoxicated, not gonna lie. So what if it's not even 10 p.m. and it's a Tuesday night? What? The Kirkland [dorm] facebook is open on my desktop and some of these people have pretty horrendous facebook pics. I almost want to put some of these faces next to pictures of farm animals and have people vote on which is more attractive.
—9:48 pm
Yea, it's on. I'm not exactly sure how the farm animals are going to fit into this whole thing (you can't really ever be sure with farm animals . . .), but I like the idea of comparing two people together.
—11:09 pm
Let the hacking begin.
—12:58 am
According to The Harvard Crimson, Facemash "used photos compiled from the online facebooks of nine Houses, placing two next to each other at a time and asking users to choose the 'hotter' person." To accomplish this, Zuckerberg hacked into the protected areas of Harvard's computer network and copied the houses' private dormitory ID images.
Harvard at that time did not have a student directory with photos and basic information, and the initial site generated 450 visitors and 22,000 photo-views in its first four hours online. That the initial site mirrored people’s physical community -- with their real identities -- represented the key aspect of what later became Facebook.
"Perhaps Harvard will squelch it for legal reasons without realizing its value as a venture that could possibly be expanded to other schools (maybe even ones with good-looking people ... )," Zuckerberg wrote in his personal blog. "But one thing is certain, and it’s that I’m a jerk for making this site. Oh well. Someone had to do it eventually ... ". The site was quickly forwarded to several campus group list-servers but was shut down a few days later by the Harvard administration. Zuckerberg was charged by the administration with breach of security, violating copyrights and violating individual privacy and faced expulsion, but ultimately the charges were dropped.
Zuckerberg expanded on this initial project that same semester by creating a social study tool ahead of an art history final: he uploaded 500 Augustan images to a website, with one image per page along with a comment section. He opened the site up to his classmates and people started sharing their notes. "The professor said it had the best grades of any final he’d ever given. This was my first social hack. With Facebook, I wanted to make something that would make Harvard (and more than that) more open," Zuckerberg said in a TechCrunch interview.
The following semester, Zuckerberg began writing code for a new website in January 2004. He was inspired, he said, by an editorial in The Harvard Crimson about the Facemash incident. "It is clear that the technology needed to create a centralized Website is readily available," the paper observed. "The benefits are many." On February 4, 2004, Zuckerberg launched The Facebook, originally located at thefacebook.com. “Everyone’s been talking a lot about a universal face book within Harvard,” Zuckerberg told The Harvard Crimson. “I think it’s kind of silly that it would take the University a couple of years to get around to it. I can do it better than they can, and I can do it in a week.” "When Mark finished the site, he told a couple of friends. And then one of them suggested putting it on the Kirkland House online mailing list, which was, like, three hundred people," according to roommate Dustin Moskovitz. "And, once they did that, several dozen people joined, and then they were telling people at the other houses. By the end of the night, we were, like, actively watching the registration process. Within twenty-four hours, we had somewhere between twelve hundred and fifteen hundred registrants."
Membership was initially restricted to students of Harvard College, and within the first month, more than half the undergraduate population at Harvard was registered on the service. Eduardo Saverin (business aspects), Dustin Moskovitz (programmer), Andrew McCollum (graphic artist), and Chris Hughes soon joined Zuckerberg to help promote the website. In March 2004, Facebook expanded to Stanford, Columbia, and Yale. This expansion continued when it opened to all Ivy League and Boston area schools, and gradually most universities in Canada and the United States. Facebook incorporated in the summer of 2004 and the entrepreneur Sean Parker, who had been informally advising Zuckerberg, became the company's president. In June 2004, Facebook moved its base of operations to Palo Alto, California. The company dropped The from its name after purchasing the domain name facebook.com in 2005 for $200,000.
Facebook launched a high school version in September 2005, which Zuckerberg called the next logical step. At that time, high school networks required an invitation to join. Facebook later expanded membership eligibility to employees of several companies, including Apple Inc. and Microsoft. Facebook was then opened on September 26, 2006 to everyone of ages 13 and older with a valid e-mail address. In October 2008, Facebook announced that it was to set up its international headquarters in Dublin, Ireland.

the history of c language

C is a general-purpose computer programming language developed in 1972 by Dennis Ritchie at the Bell Telephone Laboratories for use with the Unix operating system. Although C was designed for implementing system software, it is also widely used for developing portable application software. C is one of the most popular programming languages. It is widely used on many different software platforms, and there are few computer architectures for which a C compiler does not exist. C has greatly influenced many other popular programming languages, most notably C++, which originally began as an extension to C.
Early developments
The initial development of C occurred at AT&T Bell Labs between 1969 and 1973; according to Ritchie, the most creative period occurred in 1972. It was named "C" because many of its features were derived from an earlier language called "B", which according to Ken Thompson was a stripped-down version of the BCPL programming language.
The origin of C is closely tied to the development of the Unix operating system, originally implemented in assembly language on a PDP-7 by Ritchie and Thompson, incorporating several ideas from colleagues. Eventually they decided to port the operating system to a PDP-11. B's lack of functionality to take advantage of some of the PDP-11's features, notably byte addressability, led to the development of an early version of the C programming language.
The original PDP-11 version of the Unix system was developed in assembly language. By 1973, with the addition of struct types, the C language had become powerful enough that most of the Unix kernel was rewritten in C. This was one of the first operating system kernels implemented in a language other than assembly.
K&R C
In 1978, Brian Kernighan and Dennis Ritchie published the first edition of The C Programming Language. This book, known to C programmers as "K&R", served for many years as an informal specification of the language. The version of C that it describes is commonly referred to as K&R C. The second edition of the book covers the later ANSI C standard.
K&R introduced several language features:
standard I/O library
long int data type
unsigned int data type
compound assignment operators of the form =op (such as =-) were changed to the form op= (such as -=) to remove the semantic ambiguity created by constructs such as i=-10, which had been interpreted as i =- 10 (decrement i by 10) instead of the possibly intended i = -10 (assign -10 to i)
Even after the publication of the 1989 C standard, for many years K&R C was still considered the "lowest common denominator" to which C programmers restricted themselves when maximum portability was desired, since many older compilers were still in use, and because carefully written K&R C code can be legal Standard C as well.
In early versions of C, only functions that returned a non-integer value needed to be declared if used before the function definition; a function used without any previous declaration was assumed to return an integer, if its value was used.



by Siming

Saturday, November 14, 2009

Cloud Computing in Web Services – Next Generation Webhosting


With the advent of rapidly growing websites and the expansion of services to meet increasing business needs and clientele, webhosting has essentially transformed from a mere provider of a single, fixed platform for Internet presence into a flexible and scalable one-stop solution and service extension for both consumers and business owners world-wide.

 

What is Cloud Computing?

In the simplest terms, Cloud Computing is a set of pooled computing resources (office productivity suites, image editors, accounting programs and so on) hosted and owned by a third-party service and delivered over the Internet to your PC.

Before Cloud Computing…

Sounds complicated? Haha, let’s try an analogy:

  1. Let’s imagine you’re an executive working at a large corporation. Your job scope includes ensuring that all your employees have the right hardware and software they need to do their jobs everyday.
  2. However, buying all the hardware for them is simply not enough: You’ll also need to purchase software licenses to give your employees the programs they require.
  3. So whenever you hire a new guy, you’ll have to buy more software or licenses for his terminal in order for him to work legally and effectively.

This involves a lot of money and often becomes a burden: an unproductive chore that not only adds to your capital cost but also creates extra work in setting up and purchasing these requirements!

How does the “Cloud” work?

Cloud Computing offers an alternative solution. Instead of installing a whole suite of software on each and every computer, users only have to load one application or interface, which can be as simple as a web browser, log onto a service provider’s website, and access all the applications remotely hosted over the network, saving tons of software and hardware purchase and maintenance costs.

Scalability and On-Demand computing power, storage and bandwidth

Cloud computing customers simply pay for the resources they use as they need them. This is extremely flexible, especially for new start-ups and expanding businesses, as they do not have to manage and engineer web traffic and peak-load limits themselves.

For example, customers in the retail and sales industry can scale up the bandwidth, storage and computing power of their web services during festive seasons to meet increased business demand, and scale them down in low-peak months to maximize cost savings flexibly and efficiently.

It’s not really a novelty

Cloud computing is not really a novelty. There’s a good chance that many of us are already familiar with it and are, in fact, regular users of it. Web-based email services like Hotmail, Yahoo and Gmail are actually primitive cloud computing experiences that have become an essential part of our daily lives. Instead of running an email program on your computer, you log in to a web-based email account remotely, with the software, mails and messages all stored on the service’s computer cloud!

 

The emerging Cloud in the changing web industry

“Nothing endures but change”: change is the only constant in the world we lived in yesterday, live in today, and will live in tomorrow and into the indefinite future. Such is the truth too for cyberspace and the Internet, which remains one of the fastest growing and changing mediums, ever-expanding, reaching out and touching the lives of many others globally and effectively.

Cloud Computing technologies offer a viable, effective and cost-efficient solution that lets companies focus on their core business and delivery to customers rather than spending productive time on the side-lined tasks of hardware and software infrastructure, saving valuable assets and resources for the greater good of the company and its consumers.

Brought to you by ~
Wong Kheng Leong =]

Thursday, November 12, 2009

Amazon, the virtual Bookstore(and more!)





After the lecture on how Amazon works under the hood, I was quite confused as to what the professor was talking about, as I had no prior experience with Amazon :P

Thus I think it would be beneficial for all of us to have a sample look at the shopping experience on Amazon.


A first look at the homepage of Amazon.com

I have to admit, I was ashamed of my own lack of knowledge regarding Amazon.com.
I used to think that it was only an online bookstore that delivers Harry Potter to your doorstep as soon as it's released. I was shocked to find so many different products on their website.
It is more worthy of being called Amazon the Supermart from now on. Just a sample of the range of products available on the website: books (duh!), music, games, toys, electronics, sports, gardening and many more. Many of these products are non-virtual and in fact bulky.
There goes my mental image of Amazon warehouses storing only books.


A sample product I tried clicking on was this range of e-book readers called "Kindle". It is a supplementary product that Amazon sells to complement their virtual database of e-books. Upon checking, you can see that there are options to personalize the product on the right-hand side. Options like "Add a Kindle leather look cover $29.90" and "Add a 2-year warranty $65.00" are available and easily accessible using a simple checkbox.


And after you add the item to your shopping cart, a compiled list of accessories that go along with your product will appear, ranging from book covers to e-giftcard vouchers.

It is truly amazing that the time you have to spend to purchase an item you want is less than 10 minutes. All you have to do is key in the proper title you are looking for and get your Visa card ready.





As many of you know (if not all :P), I'm an avid Mac fan, and I must say I'm very pleased to see that Mac laptops are sold on Amazon.com.
In fact, what was amazing was that Amazon.com not only sells first-hand Macs, it also resells second-hand Macs, and both prices are below the standard retail price at Apple. It also indicates how many units of each laptop it has remaining in stock. Surely this will attract those people who always wanted a Mac but never had the budget. Furthermore, they deliver to your doorstep :D

With the discovery that they offer not only good prices for 1st-hand products but also GREAT prices for 2nd-hand products, I would personally encourage all of you to take a look there before you purchase any expensive products. Who knows, perhaps you can buy 2 of them at Amazon.com :D

Maya - the Most Prestigious 3D Computer Graphics Software


When talking about computer animation, one thing that we cannot avoid talking about is Maya; the high-end 3D computer graphics and 3D modeling software package developed by Alias Systems Corporation. It is perhaps the most popular and prestigious one of its kind and is widely used in films, TV, advertising, computer and video games, and architecture visualization and design as well. It is so influential that it won an Academy Award "for scientific and technical achievement" in 2003, citing use "on nearly every feature using 3-D computer-generated images”.

Maya was launched in two commercial versions: Maya Complete and Maya Unlimited. Maya Complete includes most of Maya’s features, whereas Maya Unlimited contains all of them. When Maya Complete was first launched, it was so expensive that it deterred ordinary home users from buying it, but now its price is about the same as that of other 3D computer graphics software, which more people can afford. Besides these two, there is also a version for noncommercial use called Maya Personal Learning Edition (PLE), which is totally free of charge. But when you use some of its key functions, a large notice keeps popping up telling you that this version is strictly for noncommercial purposes and improper usage is prohibited, and the end product comes with watermarks.

The release of Maya dramatically reduced the cost of producing 3D animations. Before then, commercial 3D animation production was basically monopolized by Softimage, which had to run on SGI workstations. Maya lowered the hardware requirement: it can work on PCs running the Windows NT operating system, which popularized 3D animation. Because of its existence, Softimage was later modified to run on PCs as well.

Maya has quite a number of plug-ins available for different specific needs. The one used in the bridge-collapse animation shown during the visual computing seminar is called Blast Code. It is a high-level animation engine which helps produce simulations of destruction and computer-animated demolition. It can simulate real blasting, missile attacks, damage to structures from natural disasters, etc. Its features have been improved during the production of films, one of which is the Oscar winner King Kong.

Game Hacking

Notice: This is only a draft.

1) Introduction:
Games have brought us fantasy.
Not only are children easily addicted to games; we frequently see adults indulging in games at work or at home, forgetting about their jobs.
However, as a computer game player, be it beginner or hardcore, you must have been through many circumstances in which you cannot advance further in the game because of some obstacle, or you feel bored because there is nothing else to do after you have cleared the game. What you wish for is something that can help you past the obstacle (although doing so will eliminate your satisfaction of completing something difficult), or something that gives you a new perspective on the game you have been playing for hundreds of hours. In either case, we are engaging in game hacking.

In this blog post, I'll talk about 2 aspects of game hacking, and at the same time evaluate their advantages and disadvantages.

2) "Soft" hacking:
a) Concept:
"Soft" hacking involves changing values in the memory which the game uses, to make the game exhibit certain desired effect(s). Since only the memory into which certain components of the game are loaded is modified, "soft" hacking will not destroy the game data.
Examples of "soft" hacking software are EmuCheat for the NDS (and Wii), which can create ActionReplay and CodeBreaker codes (shareable), and ArtMoney for PC games.
b) Extent
"Soft" hacking is only possible under certain conditions:
- The data deciding the desired effect is loaded along with the core files,
OR
- The data deciding the desired effect varies from instance to instance of the game, i.e. it is different for every save game,
AND
- The data deciding the desired effect is allocated to fixed addresses or "lazy" addresses (addresses that do not change during a session of the game). It is impossible to track down an effect if its location changes too quickly.

An example of the 1st condition (rarer) is the debug mode code in some games.
Examples of the 2nd condition are money, hit points (HP), lives, items, etc.
Usually, the outcome of "soft" hacking can be stored in a save game, if the game supports one. A save game is in fact an incomplete copy of the memory state, but one sufficient for the player to resume his/her progress in the game.

c) How-to
To "soft" hack, you need a program for viewing, searching and possibly enforcing values in memory. Some basic knowledge of data representation, intuition and luck are also required.

Firstly, think of how the data can possibly be represented. As on the hard disk, the unit of storage in memory is the byte.

The targeted data in most cases is numerical, and for the sake of simplicity it will fully occupy several bytes rather than being trimmed half-way to conserve memory space. The first case of numerical data is direct representation; money and HP are definitely stored this way. The second case is the "pointer" type. Games with complex mechanics usually have an array of data, and the stored value refers to the position of an object in that array. If we can extract the list from the game with the correct ordering, step 2 can be carried out more easily.
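Here's a quick Python sketch of these two representations, using the standard struct module. The value widths, byte order and item table are all made up for illustration; real games vary by platform.

```python
import struct

# Direct representation: suppose the player's money is 1,000,000,
# stored as a 4-byte little-endian unsigned integer (hypothetical layout).
money = 1_000_000
raw = struct.pack("<I", money)
print(raw.hex())                         # 40420f00 -- the bytes you'd see in memory
assert struct.unpack("<I", raw)[0] == money

# "Pointer" type: the item in inventory slot 0 is not stored as a name
# but as an index into the game's item table (table contents invented).
item_table = ["(none)", "Potion", "Antidote", "Elixir"]
slot0 = struct.pack("<H", 2)             # a 2-byte index
print(item_table[struct.unpack("<H", slot0)[0]])   # Antidote
```

Notice how useless the raw bytes `02 00` are until you have the item list with the correct ordering, which is exactly why extracting that list first makes step 2 easier.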

If the targeted data is of boolean type, and several such values belong to the same class, they might be packed together as bit flags instead of wasting a whole byte per value. An example is a checklist (in PMD Sky, the IQ effects).

Some feel-like-boolean data are actually numerical: certain values trigger certain in-game events. Data in this kind of representation is very hard to track down because it is usually mistaken for a boolean value, and the value is not shown directly in the game. A tip for detecting this kind of representation is the overriding effect (certain things can't happen at the same time), e.g. in Pokemon games you can't be Burned and Poisoned at the same time.
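A short Python illustration of both ideas; the flag names and status codes are invented, not taken from any real game.

```python
# Eight yes/no flags packed into a single byte (flag names are made up).
FLAGS = ["map_seen", "boss_beaten", "has_bike", "can_surf",
         "badge1", "badge2", "badge3", "badge4"]

def unpack_flags(byte):
    """Expand one packed flag byte into named booleans, bit 0 first."""
    return {name: bool(byte >> i & 1) for i, name in enumerate(FLAGS)}

flags = unpack_flags(0b00000101)             # bits 0 and 2 set
print(flags["map_seen"], flags["has_bike"])  # True True
print(flags["boss_beaten"])                  # False

# A "feel-like-boolean" field: one numeric status code per character,
# which is exactly why Burned and Poisoned can never overlap -- they
# are different values of the same byte (codes hypothetical).
STATUS = {0: "OK", 1: "Burned", 2: "Poisoned"}
status_byte = 1
print(STATUS[status_byte])                   # Burned
```

The overriding effect falls straight out of the second representation: setting the byte to 2 necessarily clears the Burned state.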

The second step involves tracking down the address of the data while observing how the memory behaves under outside interference. We will first assume the targeted data is of the static or "lazy" type, because that is how memory slots are used in most cases; it means we can modify the same address every time we play without having to track the value down again. With the help of the memory viewer, we search for the current value, change it in-game, then search again among the previous results, repeating until only a few candidate addresses remain.

Usually, values belonging to the same instance (e.g. the HP and stats of a character, or the items in the bag) are situated near each other. Another tip, though rarely applicable, is that games with almost identical gameplay and only slight differences (e.g. Pokemon D/P) will have the addresses for the same data somewhere nearby. In those cases, we can use the view-memory function to get a broader view of what else the region may hold and save time tracking down the same value again.
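The address-tracking step above can be simulated in a few lines of Python. The bytearray stands in for the game's memory, and the HP address and values are made up; tools like ArtMoney or EmuCheat perform essentially this narrowing loop over a real process's or emulator's memory.

```python
import os
import struct

# Fake 4 KB of game memory filled with random junk.
memory = bytearray(os.urandom(4096))
HP_ADDR = 0x0520                       # unknown to the "hacker"

def set_hp(value):
    """The 'game' stores HP as a 2-byte little-endian integer."""
    memory[HP_ADDR:HP_ADDR + 2] = struct.pack("<H", value)

def scan(value, candidates=None):
    """Return addresses whose 2-byte content equals value."""
    addrs = candidates if candidates is not None else range(len(memory) - 1)
    return [a for a in addrs
            if struct.unpack_from("<H", memory, a)[0] == value]

set_hp(100)                            # HP shown in-game: 100
hits = scan(100)                       # first scan: possibly many false hits
set_hp(73)                             # take damage, HP is now 73
hits = scan(73, hits)                  # rescan only the old candidates
print([hex(a) for a in hits])          # ['0x520']
```

After one in-game change, every address that merely happened to contain 100 is eliminated, because its content did not follow the HP down to 73.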

d) Pros and cons
+ "Soft" hacking requires less time and knowledge than "hard" hacking, which we will cover later.
+ "Soft" hacking won't destroy the game file; the worst it can destroy is the save game file (a backup is still needed).
- "Soft" hacking can make the game freeze or crash (again, back up first).
- "Soft" hacking cannot change values that are never loaded into memory but are read directly from the game file.
- If the memory slot storing the targeted information is of the "lazy" type, we have to track down the address of the data every time we start a new session of the game.

3) "Hard" hacking
a) Concept:
"Hard" hacking involves changing values directly in the game files. People do "hard" hacking to change values that are not loaded into memory but read directly from the component files, and to create a completely new game on top of the basic gameplay of the original, which would be impossible or cumbersome with "soft" hacking.

b) Extent
Depending on the level of the "hard" hack (shallow or deep), the end product can be a game that is totally different from the original one. The level of the hack depends on the knowledge of the hacker. To do a "deep" hack, one must know assembly language, in order to track down and observe how the game actually handles the values to create a particular effect, and then manipulate them to make the game behave as he/she wants. A "shallow" hack only involves discovering the position of the information and changing the values, without touching too much of the core of the game (which contains the instructions coordinating the other component files, such as how to stream music and control the graphics, or the basic game mechanics, e.g. the rescue password system in Pokemon Dungeon).

c) How-to

In this blog post, I'll only discuss how to do "shallow" hacking of a game. To do "shallow" hacking, one must have some basic knowledge of data representation and file structure, and a hex editor to peek into the bare, actual file stored on the HDD and edit it. "Shallow" hacking can usually only modify files which are not related to core information about how the game runs. The best targets for "shallow" hacking are games big enough to have separate component files, or a single compressed file containing numerous smaller component files. A simple game with only one executable file is harder to hack because you have to deal with both the instructions and the actual data when exploring the file.

The first step (which may not be needed at all in some cases) involves extracting the smaller component files from any big file. The easiest example to practice on is Zoo Tycoon 1 and 2: the component files of the game are actually compressed files in .zip format, which contain smaller component files; the text files are human-readable and the other files are usually of recognizable types (jpg, xml). Usually, for other games, the component files are compressed and sometimes encrypted (...[Refer to File Structure]... PK) in an obscure format which can only be interpreted by the instructions in the core component of the game. In these cases we will need other people's help, i.e. a program written by people who understand the decompression process, to extract the smaller component files inside the compressed file. E.g. for NDS games, we need ndstool or LazyNDS to decompress the content inside the game.
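For the zip-style games like the Zoo Tycoon example, Python's standard zipfile module already does the extraction. Here's a self-contained sketch; the archive is built in memory and the file names inside it are made up, not the game's real ones.

```python
import io
import zipfile

# Build a pretend "component file" in zip format (contents invented).
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as z:
    z.writestr("animals/lion.xml", "<animal><name>Lion</name></animal>")
    z.writestr("lang/strings.txt", "Lion\nZebra\n")

# A zip archive starts with the "PK" signature -- a quick way to spot
# one hiding inside a game directory under a strange extension.
print(buf.getvalue()[:2])              # b'PK'

# List and read the smaller component files inside.
with zipfile.ZipFile(buf) as z:
    print(z.namelist())                # ['animals/lion.xml', 'lang/strings.txt']
    print(z.read("lang/strings.txt").decode())
```

For a real game you would open the file on disk instead of a BytesIO buffer; checking the first two bytes for "PK" tells you whether this easy path applies before reaching for a game-specific unpacker.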

The second step involves in searching for the component files that is likely to hold information about the target and zero in on the section with information about the target.
If the game has a large number of objects that share the same "classification" and those objects are visible when we play the game (e.g. items) then we should first find the file containing the game script and extract all the name and add numbering to it. This will help us greatly in identifying certain "pointer" value which we mentioned earlier in the "soft" hacking section. To find out which component file holds information about the target, we can identify it via file name. Game developer although doesn't want other people to hack into their game, still have to keep the file name at a discernible level.
We will then open the file and look for patterns. A file containing entries of the same size usually has no offset list.
[use monster.md as an example]
For files with entries of different sizes, an offset list is usually provided at the beginning of the file.
[use mappa_s.bin as an example]
The first case makes our hack very easy: since the entry size is fixed, we have a big list of entries whose bytes we can compare against each other to tell what kind of information each byte holds and how large each field is. We can also use the numbered list we created earlier to help identify each field's function. The second case is usually harder, since we don't know exactly how the information is represented. In either case, the best method is to tamper with the information inside the file, re-compress if necessary, and play the game to see the difference (the trial-and-error method).
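The fixed-entry-size case can be sketched with Python's struct module. The record layout below (2-byte id, 2-byte HP, 4 unknown bytes, little-endian) is entirely made up for illustration - in a real hack you'd discover the layout by the byte-comparison and trial-and-error described above:

```python
import struct

# Hypothetical layout: unsigned short id, unsigned short HP, 4 unknown bytes.
ENTRY = struct.Struct("<HH4s")

def read_entries(data):
    """Split a fixed-entry-size blob into (id, hp, unknown) records."""
    return [ENTRY.unpack_from(data, off)
            for off in range(0, len(data), ENTRY.size)]

def patch_hp(data, index, new_hp):
    """Return a copy of the blob with one entry's HP field changed."""
    buf = bytearray(data)
    entry_id, _, unknown = ENTRY.unpack_from(buf, index * ENTRY.size)
    ENTRY.pack_into(buf, index * ENTRY.size, entry_id, new_hp, unknown)
    return bytes(buf)
```

Patch one field at a time, write the file back, and relaunch the game - if HP changed in-game, you've identified the field; if the game crashes, you've probably guessed the layout wrong.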

d) Pros and cons
+ "Hard" hacking can virtually change anything in the game, from value (HP, items, etc.) to behaviour (warp point destination).
- Time-consuming - you must figure out (and in some cases reverse-engineer - sound, image) the file structure from scratch, while some target values could be changed easily with "soft" hacking.
- Can also crash or freeze the game.

The most useful programming language

As a freshman in SoC who might be new to programming, like me, some of you might wonder what the most useful programming language to learn is. As we are constantly in pursuit of efficiency and effectiveness, we hope to find one or two such languages with which we can conquer the world of computing science. It is not hard to find rankings on the Internet, like one which claims that “Among thousands, 10 programming languages stand out for their job marketability and wide use. If you're looking to boost your career or learn something new, start here.” (cited from 9 Programming Languages You Should Learn Right Now by Rothberg)
1. PHP
2. C#
3. AJAX (Asynchronous JavaScript and XML)
4. JavaScript
5. Perl
6. C
7. Ruby and Ruby on Rails
8. Java
9. Python
However, if you are following this list, you are doing it wrong. Why do I say so? Technology today develops at an amazing speed, and any programming language turns from popular to outdated in a shorter and shorter time span. For example, the so-called “useful” languages - Java, PHP, Ruby - all first appeared in 1995, which means that all the hype you are supposed to ”know” right now didn't even exist just 13 years ago. (Tony, 2008)
In other words, you will never be fast enough to catch up with “the fashion”. In contrast, what remains unchanged are the core abstractions, ideas, and skills that are language-independent and transfer from one syntax to another: algorithms, data structures, complexity, and math. With these, whenever a new opportunity with a new technology comes along, we should be able to get over the learning curve fairly quickly.
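To make that concrete: an idea like binary search is the same regardless of syntax. Here it is sketched in Python, but the halving argument (O(log n) comparisons on sorted data) carries unchanged to Java, C, Scheme, or whatever language is fashionable next year:

```python
def binary_search(sorted_items, target):
    """Return the index of target in a sorted list, or -1 if absent.
    The halving idea, not the syntax, is what transfers between languages."""
    lo, hi = 0, len(sorted_items) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if sorted_items[mid] == target:
            return mid
        if sorted_items[mid] < target:
            lo = mid + 1      # target can only be in the upper half
        else:
            hi = mid - 1      # target can only be in the lower half
    return -1
```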
That explains why Haoqiang and I are doing the quite outdated Scheme instead of going with the mainstream of Java. We will talk more in our presentation “Scheme - managing complexity” tomorrow. I hope this post is helpful to you.

Computer Vision & Interface: Making today’s vision, tomorrow’s reality


Human-computer interaction has transformed over the years from traditional mouse-and-keyboard interaction to 2D window-based user interfaces with the advent of visual computing technologies. “What about the future?” one may ask. How is human-computer interaction going to change? In this article, we shall explore several outstanding developments in the field of user experience design which could become ubiquitous over the next few years.

 

The Future for Innovative Content: The Cheoptics360 Experience

Cheoptics360: panoramic viewing of 3d videos from all angles and perspectives

What is Cheoptics360?

Cheoptics360’s free-floating video display surrounds viewers with a 360-degree panoramic view of revolving 3d video images that can be seen from every perspective and in all lighting conditions. It utilizes projections from four different 3d projectors; the projected images are then re-assembled and re-generated in a transparent prism-like pyramid chamber that displays these free-floating videos in mid-air.

3d holograms of objects and people in commercial tradeshows and presentations

Cheoptics360 in action!

Resembling the 3d holograms seen in sci-fi and futuristic movies, such technologies are already being deployed in commercial tradeshows, electronic retail stores and events worldwide. Cheoptics360 creates a dramatically different user experience and platform for creative product show reels, breaking away from traditional videos and image slideshows by wowing viewers with a truly innovative experience.

Cheoptics360 opens up new horizons and opportunities for content delivery in the future, such as games systems, television and cinematic experiences, and educational channels.

Whether it is immersing yourself in a realistic 360 degree battlefield in your favourite First-Person Shooter Games or watching the World Cup from the point of view of your idol Brazilian midfielder, Cheoptics360 promises a vastly different dimension of content viewing, presentation and interactivity we can expect in the near future.

 

Photosynth

Harnessing millions of still images culled from websites such as Flickr, Facebook and other social networking sites, Photosynth computationally rebuilds breathtaking dreamscapes such as the Notre Dame Cathedral from a massive collection of 2d images, mapping them onto a 3d panoramic plane.


PhotoSynth even analyzes images such as a poster picture of the Notre Dame Cathedral in its creation of the 3d panoramic plane!

As described on the official website of Photosynth, synths are an entirely new visual medium: the software analyzes and compares each photo for similarities to the others and builds a visual model of where the photos were taken, before recreating the environment as a canvas for display. Users can navigate the resulting multi-dimensional space with zoom and navigation features.


Microsoft PhotoSynth harnesses the collective memory of the masses by culling images from satellite photos and social networking sites such as Flickr and the Facebook community, and re-creates them in 3d computational space.

Throughout many decades of evolution in computing technologies, the brilliant imagery of advanced interfaces seen in Hollywood futuristic sci-fi or action-packed movies like Iron Man and Minority Report is perhaps no longer just a distant dream and the fertile imagination of scriptwriters, far from the reach of humanity.

Computer vision and visual computing have constantly unlocked new perspectives and changed the way in which we can view and interact with the environment around us, making today’s vision into tomorrow’s reality.

-
Brought to you by ~
Wong Kheng Leong =]

Visual Computer Seminar Round Up

Sorry for this super late post... But I believe half of us were busy preparing for tests these 2 weeks, so I delayed a little (they just ended, BTW - hooray!)

OK, enough excuses... This seminar round-up is for Visual Computing. Overall, what interested me the most during the seminar was the technique of ray tracing! Ray tracing is a physics-based model that simulates light rays, as demonstrated in the picture below:


For instance, this is how an image looks with and without ray tracing. It is from the popular animated movie Cars, by Pixar:


Quoted from davidhailes, "Ray-tracing is a type of global illumination, like radiosity. When light rays hit an object, one of three different things can happen; these are absorption, reflection and refraction. In 3D applications Ray-Tracing creates this illusion. Basically it traces rays (hence the name ray-tracing) from the camera back through the image plane into the scene."

Radiosity is a global illumination algorithm which is used in 3D computer graphics rendering.

And what about illumination? There are 2 types of illumination models, namely:

  • Global illumination: models the interchange of light between all surfaces.
  • Local illumination: a single light, single surface interaction.

and this would be the global illumination image:

As you may notice, the isolation between one object and another is less obvious, and there's a higher degree of realism with the reflection of sun rays and shadows. Global illumination renderers transform a local illumination render into a realistic-looking image. The scene is rendered more accurately because, as in real life, nothing is isolated. Objects are lit by the light source, and then become sources of light themselves. For instance, a green wall will give a green tint to objects that are close to it.
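The green-wall example can be sketched numerically. This is only a toy one-bounce calculation with made-up reflectance and intensity values, just to show where the tint comes from; a real global illumination renderer integrates over many bounces and many rays:

```python
def shade(surface_color, light_color, intensity):
    """Simple diffuse model: multiply light by surface reflectance per channel."""
    return tuple(s * l * intensity for s, l in zip(surface_color, light_color))

white_light = (1.0, 1.0, 1.0)
green_wall  = (0.1, 0.9, 0.1)   # made-up RGB reflectances
grey_box    = (0.5, 0.5, 0.5)

# Local illumination only: the box sees just the white light, so it stays grey.
direct = shade(grey_box, white_light, 1.0)

# One extra bounce (the crudest global-illumination step): light reflected
# off the green wall also reaches the box, so the box picks up a green tint.
wall_glow = shade(green_wall, white_light, 1.0)
bounce    = shade(grey_box, wall_glow, 0.3)   # 0.3 = made-up bounce strength
total     = tuple(d + b for d, b in zip(direct, bounce))
```

Under local illumination the box's three channels are equal (pure grey); after the bounce its green channel is the largest, which is exactly the colour-bleeding effect described above.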

I think the artists are all amazing, as according to Mr. Sim, rendering an image can take up to a few days! Those artists must be really passionate and proud of their artwork. But most of the time, I still cannot tell the difference between a rendered image and a real photo - that's how close they look! Out of so many rendered images, I would like to share one of my favorites with you guys (simply b/c it looks so real):

Again, this is a rendered image!


History of JavaScript


Myth:

Most of us would think that IE was the main browser that JavaScript started out with.

I decided to go googling about JavaScript to find out its origin. To my surprise, JavaScript was first used in NETSCAPE. Unfortunately for this once-dominant web browser, its usage has almost disappeared.

What I found out was that JavaScript did not start out as a language by itself. Apparently, during the early 1990s, webpages were mostly static and unappealing. Brendan Eich of Netscape created Mocha (later renamed LiveScript) to make them more interactive. Netscape integrated LiveScript inside its browser, hence the code did not need to be compiled. At the same time another language, Java, had become quite popular and started appearing in Netscape, although it needed a plugin. Netscape decided to cash in on that by renaming LiveScript to JavaScript. The naming was confusing, as it gave the impression that it was a spin-off from Java; many characterized it as a marketing ploy by Netscape. JavaScript, despite its name, is not related to Java programming: its key design principles are derived from the Scheme and Self programming languages. It is amazing how all the languages are linked together one way or another.

Other variations like DHTML (HTML + JavaScript) allow more interactive webpages to be created.



Here are some free scripts you can use to make your webpages more interactive.

However, because JavaScript is very interactive, it allows some control to happen on the user's (client) side.
Click on the photo below to see what's different. There are a lot of websites which show you how these codes are done.



Wednesday, November 11, 2009

Pre-reading of Scheme for Friday’s presentation


As Friday is the last presentation for fmc1202, Liu Liu and I have prepared a great introduction to Scheme for you. Before the presentation, I would like to write something about Scheme's history and basic concepts. These topics will give you some background on Scheme's history and how to write basic code. We will not cover most of the things below during our presentation, so if you can read this beforehand, you will get a better feel for how powerful and interesting Scheme is during the presentation.

Scheme is one of the two main dialects of the programming language Lisp. Unlike Common Lisp, the other main dialect, Scheme follows a minimalist design philosophy specifying a small standard core with powerful tools for language extension. Its compactness and elegance have made it popular with educators, language designers, programmers, implementers, and hobbyists, and this diverse appeal is seen as both strength and, because of the diversity of its constituencies and the wide divergence between implementations, one of its weaknesses.

Scheme was developed at the MIT AI Lab by Guy L. Steele and Gerald Jay Sussman, who introduced it to the academic world via a series of memos, now referred to as the Lambda Papers, over the period 1975-1980. One of the most interesting facts in SoC is that, so far, all of our profs who taught or are still teaching Scheme graduated from MIT. And you can get all the resources for the course free on the MIT website.



I wrote some conclusions about Scheme, but I found mine were much less clear than the introductions in the SICP book. So I'll just copy some of the most important concepts for you and hope you can get to know Scheme in a very short time.

1.1 The Elements of Programming
A powerful programming language is more than just a means for instructing a computer to perform tasks. The language also serves as a framework within which we organize our ideas about processes. Thus, when we describe a language, we should pay particular attention to the means that the language provides for combining simple ideas to form more complex ideas. Every powerful language has three mechanisms for accomplishing this:
primitive expressions, which represent the simplest entities the language is concerned with,
means of combination, by which compound elements are built from simpler ones, and
means of abstraction, by which compound elements can be named and manipulated as units.
In programming, we deal with two kinds of elements: procedures and data. (Later we will discover that they are really not so distinct.) Informally, data is ``stuff'' that we want to manipulate, and procedures are descriptions of the rules for manipulating the data. Thus, any powerful programming language should be able to describe primitive data and primitive procedures and should have methods for combining and abstracting procedures and data.
In this chapter we will deal only with simple numerical data so that we can focus on the rules for building procedures.4 In later chapters we will see that these same rules allow us to build procedures to manipulate compound data as well.
1.1.1 Expressions
One easy way to get started at programming is to examine some typical interactions with an interpreter for the Scheme dialect of Lisp. Imagine that you are sitting at a computer terminal. You type an expression, and the interpreter responds by displaying the result of its evaluating that expression.
One kind of primitive expression you might type is a number. (More precisely, the expression that you type consists of the numerals that represent the number in base 10.) If you present Lisp with a number
486
the interpreter will respond by printing5
486
Expressions representing numbers may be combined with an expression representing a primitive procedure (such as + or *) to form a compound expression that represents the application of the procedure to those numbers. For example:
(+ 137 349)
486
(- 1000 334)
666
(* 5 99)
495
(/ 10 5)
2
(+ 2.7 10)
12.7
Expressions such as these, formed by delimiting a list of expressions within parentheses in order to denote procedure application, are called combinations. The leftmost element in the list is called the operator, and the other elements are called operands. The value of a combination is obtained by applying the procedure specified by the operator to the arguments that are the values of the operands.
The convention of placing the operator to the left of the operands is known as prefix notation, and it may be somewhat confusing at first because it departs significantly from the customary mathematical convention. Prefix notation has several advantages, however. One of them is that it can accommodate procedures that may take an arbitrary number of arguments, as in the following examples:
(+ 21 35 12 7)
75
(* 25 4 12)
1200
No ambiguity can arise, because the operator is always the leftmost element and the entire combination is delimited by the parentheses.
A second advantage of prefix notation is that it extends in a straightforward way to allow combinations to be nested, that is, to have combinations whose elements are themselves combinations:
(+ (* 3 5) (- 10 6))
19
There is no limit (in principle) to the depth of such nesting and to the overall complexity of the expressions that the Lisp interpreter can evaluate. It is we humans who get confused by still relatively simple expressions such as
(+ (* 3 (+ (* 2 4) (+ 3 5))) (+ (- 10 7) 6))
which the interpreter would readily evaluate to be 57. We can help ourselves by writing such an expression in the form
(+ (* 3
      (+ (* 2 4)
         (+ 3 5)))
   (+ (- 10 7)
      6))
following a formatting convention known as pretty-printing, in which each long combination is written so that the operands are aligned vertically. The resulting indentations display clearly the structure of the expression.6
Even with complex expressions, the interpreter always operates in the same basic cycle: It reads an expression from the terminal, evaluates the expression, and prints the result. This mode of operation is often expressed by saying that the interpreter runs in a read-eval-print loop. Observe in particular that it is not necessary to explicitly instruct the interpreter to print the value of the expression.7
1.1.2 Naming and the Environment
A critical aspect of a programming language is the means it provides for using names to refer to computational objects. We say that the name identifies a variable whose value is the object.
In the Scheme dialect of Lisp, we name things with define. Typing
(define size 2)
causes the interpreter to associate the value 2 with the name size.8 Once the name size has been associated with the number 2, we can refer to the value 2 by name:
size
2
(* 5 size)
10
Here are further examples of the use of define:
(define pi 3.14159)
(define radius 10)
(* pi (* radius radius))
314.159
(define circumference (* 2 pi radius))
circumference
62.8318
Define is our language's simplest means of abstraction, for it allows us to use simple names to refer to the results of compound operations, such as the circumference computed above. In general, computational objects may have very complex structures, and it would be extremely inconvenient to have to remember and repeat their details each time we want to use them. Indeed, complex programs are constructed by building, step by step, computational objects of increasing complexity. The interpreter makes this step-by-step program construction particularly convenient because name-object associations can be created incrementally in successive interactions. This feature encourages the incremental development and testing of programs and is largely responsible for the fact that a Lisp program usually consists of a large number of relatively simple procedures.
It should be clear that the possibility of associating values with symbols and later retrieving them means that the interpreter must maintain some sort of memory that keeps track of the name-object pairs. This memory is called the environment (more precisely the global environment, since we will see later that a computation may involve a number of different environments).9
1.1.3 Evaluating Combinations
One of our goals in this chapter is to isolate issues about thinking procedurally. As a case in point, let us consider that, in evaluating combinations, the interpreter is itself following a procedure.
To evaluate a combination, do the following:
1. Evaluate the subexpressions of the combination.
2. Apply the procedure that is the value of the leftmost subexpression (the operator) to the arguments that are the values of the other subexpressions (the operands).
Even this simple rule illustrates some important points about processes in general. First, observe that the first step dictates that in order to accomplish the evaluation process for a combination we must first perform the evaluation process on each element of the combination. Thus, the evaluation rule is recursive in nature; that is, it includes, as one of its steps, the need to invoke the rule itself.10
Notice how succinctly the idea of recursion can be used to express what, in the case of a deeply nested combination, would otherwise be viewed as a rather complicated process. For example, evaluating
(* (+ 2 (* 4 6)) (+ 3 5 7))
requires that the evaluation rule be applied to four different combinations. We can obtain a picture of this process by representing the combination in the form of a tree, as shown in figure 1.1. Each combination is represented by a node with branches corresponding to the operator and the operands of the combination stemming from it. The terminal nodes (that is, nodes with no branches stemming from them) represent either operators or numbers. Viewing evaluation in terms of the tree, we can imagine that the values of the operands percolate upward, starting from the terminal nodes and then combining at higher and higher levels. In general, we shall see that recursion is a very powerful technique for dealing with hierarchical, treelike objects. In fact, the ``percolate values upward'' form of the evaluation rule is an example of a general kind of process known as tree accumulation.
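The recursive two-step rule can be mimicked in a few lines. This sketch uses Python rather than Scheme, with nested lists standing in for combinations and a toy environment containing only + and *:

```python
from functools import reduce
import operator

# Toy global environment: just the two primitive procedures used above.
ENV = {
    '+': lambda *args: sum(args),
    '*': lambda *args: reduce(operator.mul, args, 1),
}

def evaluate(expr):
    """Evaluate a nested prefix combination by the two-step rule:
    (1) evaluate the subexpressions, (2) apply the operator to the results."""
    if isinstance(expr, (int, float)):           # primitive: a number names itself
        return expr
    op, *operands = expr                          # leftmost element is the operator
    args = [evaluate(sub) for sub in operands]    # step 1 (recursive)
    return ENV[op](*args)                         # step 2 (apply)
```

For example, evaluate(['*', ['+', 2, ['*', 4, 6]], ['+', 3, 5, 7]]) mirrors the combination (* (+ 2 (* 4 6)) (+ 3 5 7)): the recursion walks exactly the tree of figure 1.1, percolating values upward.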

Figure 1.1: Tree representation, showing the value of each subcombination.
Next, observe that the repeated application of the first step brings us to the point where we need to evaluate, not combinations, but primitive expressions such as numerals, built-in operators, or other names. We take care of the primitive cases by stipulating that
the values of numerals are the numbers that they name,
the values of built-in operators are the machine instruction sequences that carry out the corresponding operations, and
the values of other names are the objects associated with those names in the environment.
We may regard the second rule as a special case of the third one by stipulating that symbols such as + and * are also included in the global environment, and are associated with the sequences of machine instructions that are their ``values.'' The key point to notice is the role of the environment in determining the meaning of the symbols in expressions. In an interactive language such as Lisp, it is meaningless to speak of the value of an expression such as (+ x 1) without specifying any information about the environment that would provide a meaning for the symbol x (or even for the symbol +). As we shall see in chapter 3, the general notion of the environment as providing a context in which evaluation takes place will play an important role in our understanding of program execution.
Notice that the evaluation rule given above does not handle definitions. For instance, evaluating (define x 3) does not apply define to two arguments, one of which is the value of the symbol x and the other of which is 3, since the purpose of the define is precisely to associate x with a value. (That is, (define x 3) is not a combination.)
Such exceptions to the general evaluation rule are called special forms. Define is the only example of a special form that we have seen so far, but we will meet others shortly. Each special form has its own evaluation rule. The various kinds of expressions (each with its associated evaluation rule) constitute the syntax of the programming language. In comparison with most other programming languages, Lisp has a very simple syntax; that is, the evaluation rule for expressions can be described by a simple general rule together with specialized rules for a small number of special forms.11
1.1.4 Compound Procedures
We have identified in Lisp some of the elements that must appear in any powerful programming language:
Numbers and arithmetic operations are primitive data and procedures.
Nesting of combinations provides a means of combining operations.
Definitions that associate names with values provide a limited means of abstraction.
Now we will learn about procedure definitions, a much more powerful abstraction technique by which a compound operation can be given a name and then referred to as a unit.
We begin by examining how to express the idea of ``squaring.'' We might say, ``To square something, multiply it by itself.'' This is expressed in our language as
(define (square x) (* x x))
We can understand this in the following way:
(define (square x) (* x x)) To square something, multiply it by itself.
We have here a compound procedure, which has been given the name square. The procedure represents the operation of multiplying something by itself. The thing to be multiplied is given a local name, x, which plays the same role that a pronoun plays in natural language. Evaluating the definition creates this compound procedure and associates it with the name square.12
The general form of a procedure definition is
(define (&lt;name&gt; &lt;formal parameters&gt;) &lt;body&gt;)
The &lt;name&gt; is a symbol to be associated with the procedure definition in the environment.13 The &lt;formal parameters&gt; are the names used within the body of the procedure to refer to the corresponding arguments of the procedure. The &lt;body&gt; is an expression that will yield the value of the procedure application when the formal parameters are replaced by the actual arguments to which the procedure is applied.14 The &lt;name&gt; and the &lt;formal parameters&gt; are grouped within parentheses, just as they would be in an actual call to the procedure being defined.
Having defined square, we can now use it:
(square 21)
441
(square (+ 2 5))
49
(square (square 3))
81
We can also use square as a building block in defining other procedures. For example, x2 + y2 can be expressed as
(+ (square x) (square y))
We can easily define a procedure sum-of-squares that, given any two numbers as arguments, produces the sum of their squares:
(define (sum-of-squares x y)
  (+ (square x) (square y)))
(sum-of-squares 3 4)
25
Now we can use sum-of-squares as a building block in constructing further procedures:
(define (f a)
  (sum-of-squares (+ a 1) (* a 2)))
(f 5)
136
Compound procedures are used in exactly the same way as primitive procedures. Indeed, one could not tell by looking at the definition of sum-of-squares given above whether square was built into the interpreter, like + and *, or defined as a compound procedure.
1.1.5 The Substitution Model for Procedure Application
To evaluate a combination whose operator names a compound procedure, the interpreter follows much the same process as for combinations whose operators name primitive procedures, which we described in section 1.1.3. That is, the interpreter evaluates the elements of the combination and applies the procedure (which is the value of the operator of the combination) to the arguments (which are the values of the operands of the combination).
We can assume that the mechanism for applying primitive procedures to arguments is built into the interpreter. For compound procedures, the application process is as follows:
To apply a compound procedure to arguments, evaluate the body of the procedure with each formal parameter replaced by the corresponding argument.
To illustrate this process, let's evaluate the combination
(f 5)
where f is the procedure defined in section 1.1.4. We begin by retrieving the body of f:
(sum-of-squares (+ a 1) (* a 2))
Then we replace the formal parameter a by the argument 5:
(sum-of-squares (+ 5 1) (* 5 2))
Thus the problem reduces to the evaluation of a combination with two operands and an operator sum-of-squares. Evaluating this combination involves three subproblems. We must evaluate the operator to get the procedure to be applied, and we must evaluate the operands to get the arguments. Now (+ 5 1) produces 6 and (* 5 2) produces 10, so we must apply the sum-of-squares procedure to 6 and 10. These values are substituted for the formal parameters x and y in the body of sum-of-squares, reducing the expression to
(+ (square 6) (square 10))
If we use the definition of square, this reduces to
(+ (* 6 6) (* 10 10))
which reduces by multiplication to
(+ 36 100)
and finally to
136
The process we have just described is called the substitution model for procedure application. It can be taken as a model that determines the ``meaning'' of procedure application, insofar as the procedures in this chapter are concerned. However, there are two points that should be stressed:
The purpose of the substitution is to help us think about procedure application, not to provide a description of how the interpreter really works. Typical interpreters do not evaluate procedure applications by manipulating the text of a procedure to substitute values for the formal parameters. In practice, the ``substitution'' is accomplished by using a local environment for the formal parameters. We will discuss this more fully in chapters 3 and 4 when we examine the implementation of an interpreter in detail.
Over the course of this book, we will present a sequence of increasingly elaborate models of how interpreters work, culminating with a complete implementation of an interpreter and compiler in chapter 5. The substitution model is only the first of these models -- a way to get started thinking formally about the evaluation process. In general, when modeling phenomena in science and engineering, we begin with simplified, incomplete models. As we examine things in greater detail, these simple models become inadequate and must be replaced by more refined models. The substitution model is no exception. In particular, when we address in chapter 3 the use of procedures with ``mutable data,'' we will see that the substitution model breaks down and must be replaced by a more complicated model of procedure application.15
Applicative order versus normal order
According to the description of evaluation given in section 1.1.3, the interpreter first evaluates the operator and operands and then applies the resulting procedure to the resulting arguments. This is not the only way to perform evaluation. An alternative evaluation model would not evaluate the operands until their values were needed. Instead it would first substitute operand expressions for parameters until it obtained an expression involving only primitive operators, and would then perform the evaluation. If we used this method, the evaluation of
(f 5)
would proceed according to the sequence of expansions
(sum-of-squares (+ 5 1) (* 5 2))
(+ (square (+ 5 1)) (square (* 5 2)))
(+ (* (+ 5 1) (+ 5 1)) (* (* 5 2) (* 5 2)))
followed by the reductions
(+ (* 6 6) (* 10 10))
(+ 36 100)
136
This gives the same answer as our previous evaluation model, but the process is different. In particular, the evaluations of (+ 5 1) and (* 5 2) are each performed twice here, corresponding to the reduction of the expression
(* x x)
with x replaced respectively by (+ 5 1) and (* 5 2).
This alternative ``fully expand and then reduce'' evaluation method is known as normal-order evaluation, in contrast to the ``evaluate the arguments and then apply'' method that the interpreter actually uses, which is called applicative-order evaluation. It can be shown that, for procedure applications that can be modeled using substitution (including all the procedures in the first two chapters of this book) and that yield legitimate values, normal-order and applicative-order evaluation produce the same value. (See exercise 1.5 for an instance of an ``illegitimate'' value where normal-order and applicative-order evaluation do not give the same result.)
Lisp uses applicative-order evaluation, partly because of the additional efficiency obtained from avoiding multiple evaluations of expressions such as those illustrated with (+ 5 1) and (* 5 2) above and, more significantly, because normal-order evaluation becomes much more complicated to deal with when we leave the realm of procedures that can be modeled by substitution. On the other hand, normal-order evaluation can be an extremely valuable tool, and we will investigate some of its implications in chapters 3 and 4.16
1.1.6 Conditional Expressions and Predicates
The expressive power of the class of procedures that we can define at this point is very limited, because we have no way to make tests and to perform different operations depending on the result of a test. For instance, we cannot define a procedure that computes the absolute value of a number by testing whether the number is positive, negative, or zero and taking different actions in the different cases according to the rule
This construct is called a case analysis, and there is a special form in Lisp for notating such a case analysis. It is called cond (which stands for ``conditional''), and it is used as follows:
(define (abs x)
  (cond ((> x 0) x)
        ((= x 0) 0)
        ((< x 0) (- x))))

The general form of a conditional expression is

(cond (⟨p1⟩ ⟨e1⟩)
      (⟨p2⟩ ⟨e2⟩)
      ...
      (⟨pn⟩ ⟨en⟩))

consisting of the symbol cond followed by parenthesized pairs of expressions (⟨p⟩ ⟨e⟩) called clauses. The first expression in each pair is a predicate -- that is, an expression whose value is interpreted as either true or false.
Conditional expressions are evaluated as follows. The predicate ⟨p1⟩ is evaluated first. If its value is false, then ⟨p2⟩ is evaluated. If ⟨p2⟩'s value is also false, then ⟨p3⟩ is evaluated. This process continues until a predicate is found whose value is true, in which case the interpreter returns the value of the corresponding consequent expression ⟨e⟩ of the clause as the value of the conditional expression. If none of the ⟨p⟩'s is found to be true, the value of the cond is undefined.
The word predicate is used for procedures that return true or false, as well as for expressions that evaluate to true or false. The absolute-value procedure abs makes use of the primitive predicates >, <, and =. These take two numbers as arguments and test whether the first number is, respectively, greater than, less than, or equal to the second number, returning true or false accordingly.
Another way to write the absolute-value procedure is

(define (abs x)
  (cond ((< x 0) (- x))
        (else x)))

which could be expressed in English as ``If x is less than zero return - x; otherwise return x.'' Else is a special symbol that can be used in place of the ⟨p⟩ in the final clause of a cond. This causes the cond to return as its value the value of the corresponding ⟨e⟩ whenever all previous clauses have been bypassed. In fact, any expression that always evaluates to a true value could be used as the ⟨p⟩ here.
Here is yet another way to write the absolute-value procedure:

(define (abs x)
  (if (< x 0)
      (- x)
      x))

This uses the special form if, a restricted type of conditional that can be used when there are precisely two cases in the case analysis. The general form of an if expression is

(if ⟨predicate⟩ ⟨consequent⟩ ⟨alternative⟩)

To evaluate an if expression, the interpreter starts by evaluating the ⟨predicate⟩ part of the expression. If the ⟨predicate⟩ evaluates to a true value, the interpreter then evaluates the ⟨consequent⟩ and returns its value. Otherwise it evaluates the ⟨alternative⟩ and returns its value.
In addition to primitive predicates such as <, =, and >, there are logical composition operations, which enable us to construct compound predicates. The three most frequently used are these:
(and ⟨e1⟩ ... ⟨en⟩)

The interpreter evaluates the expressions ⟨e⟩ one at a time, in left-to-right order. If any ⟨e⟩ evaluates to false, the value of the and expression is false, and the rest of the ⟨e⟩'s are not evaluated. If all ⟨e⟩'s evaluate to true values, the value of the and expression is the value of the last one.

(or ⟨e1⟩ ... ⟨en⟩)

The interpreter evaluates the expressions ⟨e⟩ one at a time, in left-to-right order. If any ⟨e⟩ evaluates to a true value, that value is returned as the value of the or expression, and the rest of the ⟨e⟩'s are not evaluated. If all ⟨e⟩'s evaluate to false, the value of the or expression is false.

(not ⟨e⟩)

The value of a not expression is true when the expression ⟨e⟩ evaluates to false, and false otherwise.
Notice that and and or are special forms, not procedures, because the subexpressions are not necessarily all evaluated. Not is an ordinary procedure.
As an example of how these are used, the condition that a number x be in the range 5 < x < 10 may be expressed as

(and (> x 5) (< x 10))

As another example, we can define a predicate to test whether one number is greater than or equal to another as

(define (>= x y)
  (or (> x y) (= x y)))

or alternatively as

(define (>= x y)
  (not (< x y)))
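Because and is a special form rather than a procedure, it can also guard an expression that would otherwise cause an error. Here is a small illustration of our own (safe-div is our name for it, not a standard procedure):

```scheme
;; Short-circuiting in action: the division is never attempted
;; when the guard is false.
(define (safe-div x y)
  (and (not (= y 0)) (/ x y)))

(safe-div 10 2)   ; 5
(safe-div 10 0)   ; #f -- (/ x y) is never evaluated
```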


Now, this is the end. You now know all the basic concepts in Scheme, and you can write your own code.
To download Scheme and learn more about it, you can go to: http://www.plt-scheme.org/

Round-up for Amazon seminar

Some of the previous lectures involved professional knowledge, but this lecture dealt mostly with complex, specialized knowledge and was probably the most technical one. The part about Amazon mainly covers how Amazon deals with its huge amount of data.

Amazon runs a world-wide e-commerce platform that serves tens of millions of customers at peak times, using tens of thousands of servers located in many data centers around the world. There are strict operational requirements on Amazon’s platform in terms of performance, reliability and efficiency (Dynamo: Amazon’s Highly Available Key-value Store). Cloud data services (cloud computing is the Internet-based development and use of computer technology) are traditionally built around relational database systems. However, traditional RDBMS clouds are expensive to maintain and license, and expensive for storing large amounts of data. The solution is to relax some of the service guarantees of a traditional RDBMS, which is what Amazon does with Dynamo, its internal storage system.

According to Wikipedia, Dynamo (storage system) is a highly available, proprietary key-value storage system. It has properties of both databases and distributed hash tables (DHTs). It is not directly exposed as a web service, but is used to power parts of other Amazon Web Services. The main advantage of Dynamo is that its client applications can tune the values of N, R and W to achieve their desired levels of performance, availability and durability. The system is used to support many of the most critical elements of Amazon's operation, including shopping-cart processing. Dynamo, as an alternative to rigid relational database systems, has been the underlying storage technology for a number of the core services in Amazon’s e-commerce platform. It was able to scale to extreme peak loads efficiently, without any downtime, during the busy holiday shopping season (Inside Amazon's dynamo, 2007). It offers a simple primary-key-based data model and stores vast amounts of information on distributed, low-cost virtualized nodes. The motivation behind Dynamo comes down to four factors: scalability, simplicity, a key-value interface, and high availability with guaranteed Service Level Agreements (SLAs).
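To make the N, R and W tuning concrete: N is the number of replicas kept for each key, W is the number of nodes that must acknowledge a write, and R is the number of nodes consulted on a read. Here is a toy sketch of our own (written in Scheme, the language used elsewhere on this blog; it is not Amazon's code) of the quorum condition described in the Dynamo paper:

```scheme
;; Dynamo-style quorum check (our own sketch): choosing R + W > N
;; guarantees that every read quorum overlaps every write quorum,
;; so a read always reaches at least one up-to-date replica.
(define (quorums-overlap? n r w)
  (> (+ r w) n))

(quorums-overlap? 3 2 2)  ; #t -- a typical Dynamo configuration
(quorums-overlap? 3 1 1)  ; #f -- faster, but reads may miss writes
```

Lowering W trades durability for write latency, and lowering R trades consistency for read latency, which is exactly the tuning flexibility the lecture described.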

Even after the lecture and the research for this roundup, I am still confused about some parts of the system, but I can see that Amazon has put great effort into handling its large amount of data. More importantly, it succeeded, and I think that is one of the reasons why Amazon has become one of the largest e-commerce operations in the world. Actually, I have already done some shopping on Amazon in China, for books, accessories and such. I chose Amazon because of its convenient, well-made shopping booklet, which features different kinds of goods every month, not just because it is cheaper. My point is that Amazon has many advantages over other e-commerce operations, and we have many reasons to choose it.
References
Dynamo: Amazon’s Highly Available Key-value Store. (n.d.). Retrieved November 7, 2009, from http://s3.amazonaws.com/AllThingsDistributed/sosp/amazon-dynamo-sosp2007.pdf
Dynamo (storage system). (n.d.). Wikipedia, the free encyclopedia. Retrieved November 7, 2009, from http://en.wikipedia.org/wiki/Dynamo_(storage_system)
Inside Amazon's dynamo. (October 3, 2007). Retrieved November 7, 2009, from http://www.roughtype.com/archives/2007/10/inside_amazons.php


By Zhu Li and Zhang Haoqiang

I love FMC1202

I still need 1 point to meet the requirement to pass FMC1202. The visual computing seminar once again showed me the wizardry of computer technology. I was so amazed that I wanted to write a blog post about it. However, I realized that I should do something more meaningful for the last blog post. Thus, I changed my mind, and I will write instead about my feelings on taking the FMC1202 module.
When I selected my modules through CORS, my friend told me that there was a new module open only to freshmen. I thought that I shouldn't lose this chance; otherwise I would lose it forever. Then my friend told me that this module had no exam and did not count toward the CAP. Great! I would have been crazy not to take it. I like being free of stress, and I thought this module would not stress me, so I could enjoy myself. However, I found that this was not quite true. Though FMC1202 hasn't taken too much of my time and I like its content, I still suffered when doing the presentations. I am always nervous when delivering speeches in public, and I can't do it well. However, Damith said that we had to do at least two presentations (in fact, I did three this semester for this module). I felt scared at first, but after some practice, I did a good one out of the three. From this, I learned a lot. I heard that in the following years of study, I will get many projects and presentations. This module has helped me improve my skills in this respect and gives me a better chance of surviving university study.
For the following part, I will write something about the seminars as a whole. The first seminar was A Brief History of Computing (by Michael Brown). It was a good start for this module: interesting and easy to understand, as it didn't have too much odd theory. The change in the definition of "computer" is interesting; it is really amazing that the definition in the Oxford English Dictionary (1955) was "A person who makes calculations or computations; a calculator, a reckoner; spec. a person employed to make calculations in an observatory, in surveying, etc." In addition, I like what Prof. Edsger W. Dijkstra said: "Computer science is no more about computers than astronomy is about telescopes." Having invented much of the technology of software, Dijkstra eschewed the use of computers in his own work for many decades. Almost all EWDs appearing after 1972 were hand-written. When lecturing, he would write proofs in chalk on a blackboard rather than using overhead foils, let alone PowerPoint slides. Even after he succumbed to his UT colleagues' encouragement and acquired a Macintosh computer, he used it only for e-mail and for browsing the World Wide Web. Really amazing! How could a computer scientist avoid using computers?
The second seminar was A First Look at Second Life. The seminar brought us into a virtual world, but one that contains almost everything: a mysterious underwater world, Da Vinci's museum, the beautiful NUS campus, other famous universities, etc. This gave us an idea of what the new world could be. We also got to dress our avatars in Second Life. Some of us dressed sexy, some dressed like animals, and some boys dressed like girls. Another amazing thing is that we could own our own cars. The cars were cool, and we got them for free.
The third seminar was How YouTube Works. After the seminar, I knew more about the principles behind how YouTube works, the problems YouTube met, and how they solved them. One thing is that YouTube spent a lot of money on bandwidth. Because of this, YouTube came up with a compression algorithm with little sacrifice in quality. Another thing is the copyright issue. As some people may upload videos that violate others' copyright, YouTube runs the risk of being sued. For this, they came up with video fingerprinting so that they can find the videos which violate others' copyright and remove them.
The fourth seminar was Viruses R Us. This seminar was quite interesting, and besides showing me how viruses work, it showed me the power of virtual machines. The viruses Mr. Liang showed us were interesting, and from them we learned the origin of viruses. A so-called hacker in those days coded a virus just to show his talent, not to destroy things or filch information. To some extent, the hackers of that time were cute and were helpful for the development of computer technology. Besides, Mr. Liang also gave us suggestions on how to protect our computers. I think the simplest but most useful idea is "Don't log in as administrator unless necessary." Another useful method is using a virtual machine. This works well because even if your virtual machine is destroyed, it will not affect your host operating system, and you can easily roll back to a state in which the virtual machine worked well.
The fifth seminar was Search Engines. The seminar mainly covered three things: the Web phenomenon, Web statistics, and search engines. The Web-phenomenon part showed that websites and information have grown at such a rapid pace that search engines are needed to find the useful information. The Web-statistics part showed us that search engines have developed very fast and are now among the most popular websites. The search-engines part showed how search engines work: they use web crawlers to collect information on each page, and they came up with an algorithm to rank the pages so that users can get useful information.
The sixth seminar was Invisible Software. This seminar broadened my knowledge and made me realize that my idea of software was too narrow. Mrs Tulika Mitra showed us a smart shirt which can send an alarm when a child stops breathing. From this I learned that invisible software is everywhere. In addition, invisible software is really powerful. It can be used to design automotive electronics, and it is an important part of an F1 team's vehicle.
The seventh seminar was Wizards of OS. This topic contained two parts. In the first part, we played with different OSes, like Windows 7, Mac OS X, Linux and OpenSolaris. It was really interesting, and I found Linux really good. In the second part, we learned the details of the OS. I really liked this seminar. I must say I used to treat the computer just as a tool and never tried to learn more about the computer itself. However, this seminar helped me a lot. From it, I know what an OS really is and how it works. As I have already written a blog post about it, I won't discuss it much here.
The eighth seminar was Google AND AMAZON - How does it work under the hood? As I couldn't follow the lecturer and it seemed hard to understand, I didn't get a good grasp of the lecture.
The ninth lecture was Visual Computing. I must say this was a really amazing seminar; it is astonishing how realistic the results software can produce are. The seminar mainly talked about four good ideas (models, collecting lots of data, bringing more knowledge to bear, and machine learning) and two bad ideas (imitating human vision and pushing math too far). These ideas are really helpful and brought me into an amazing world.
The last seminar was The Weird Math behind JavaScript Programming. It is really amazing that a programming language can be so simple. It shows once again that computer scientists are a group of strange guys. The lecturer kept making the programming model simpler and successfully showed that function definition and application alone can express everything. The lambda calculus is really amazing and simple. The simplest is the most beautiful.
That covers all the seminars. I hope there will be more modules like this so that I can broaden my knowledge and learn more about computer technology as a whole.
