IBM Ventures Advances Corporate Goals On 3 Pillars        
Platforms - We provide access to IBM assets such as Watson, data science capabilities, cloud resources, security protocols, IoT platforms. IBM is also ...
          IBM makes a deep learning breakthrough        
IBM announced PowerAI DDL on Tuesday, a new technique designed to reduce the time it takes to train distributed deep learning systems, according ...
          RE[3]: Apple just released their new budget PC        
Well, considering those models are five years old, of course they include CRTs and not LCDs (how the hell would they ship an iMac without a screen... was it a cube with a gaping hole in it???). That'd be like saying IBM's 386s from back in 1992 were expensive because you can buy much faster computers from Apple in 2006.
          RE[2]: Not Good Pricing At All        
" Apple knows their pricing and their market." Obviously not; Apple bombarded us with the "switch to mac" since OSX came out and they just admitted in their last WWDC that only 50% of their new sellings were for Winodws customers; from this we can see that Apple cares about windows users -unlike you-, while they cannot reach the heart and dollar of them because they don't listen quickly to such wishes that they even promote you to send on their web site. "now if theyd just put an intel chip in it:" Actually, it wasn't customers wish for intel chip but rather Apples choice to get rid of IBM ignorance and disrespect to Apple's Buisness demands especially for laptops product line. "get a job, dump or get off the pot and just get one." You are either amature for what you said or you are a frustrated Apple's customer support guy. There is a wise say for you: Those who don't believe FACTS, must suffer till they respect them.
          Guide for Picking The Best Android Phone for You        
Sony Xperia X10 vs Nexus One vs Motorola Droid vs Acer Liquid vs Archos

Xperia X10

Nexus One

Motorola Droid

Acer Liquid

(Updated: 21st Jan 2010) The Android handset landscape has changed drastically over the past year, from a literal handful of options to the fingers on both your hands, the toes on both your feet, and all the mistresses Tiger Woods has had in the past 24 hours (OK, maybe 4 hours). You get the point though: there are quite a few options, and through the course of 2010 these options will only increase.

The only other mainstream smartphone platform that rivals the Android handsets available in 2010 will be Windows Mobile – and we're all rushing for it – not!

So what are the handsets to consider in 2010? The ones currently released on the market that we will look at are the Acer Liquid and Motorola Droid and an additional three to be released early 2010, the Sony Xperia X10, Google Nexus One (Passion, HTC Bravo) and Archos Phone Tablet – though we only have a handful of details on the phone.

Archos Phone

We will look at hardware and software sub-categories, and compare the phones based on the information we have.



The Nexus One and Sony Xperia X10 have the snappier Qualcomm Snapdragon 1 GHz processor on board. The Acer Liquid has a downclocked version of the Snapdragon running at 768 MHz – perhaps to conserve battery. This would probably put the Acer Liquid's performance more on par with the Motorola Droid's. The Archos Phone promises to be a really fast phone, with an upgraded ARM Cortex processor running at 1 GHz and an improved GPU over the Droid and iPhone.

Processor

Nexus One: Qualcomm Snapdragon QSD 8250, 1.0 GHz
Motorola Droid: Texas Instruments OMAP 3430, 550 MHz
Sony Xperia X10: Qualcomm Snapdragon QSD 8250, 1.0 GHz
Acer Liquid: Qualcomm Snapdragon QSD 8250, 768 MHz
Archos Phone: ARM Cortex, 1 GHz


The Snapdragon's Adreno 200 graphics core is phenomenal on the triangle render benchmark, coming in with a score of approximately 22 million triangles/sec compared to approximately 7 million triangles/sec on the Motorola Droid's SGX530. This is an important element for 3D graphics. Interestingly, the iPhone 3GS has a similar CPU to the Motorola Droid but an upgraded, faster SGX535 GPU, which is capable of 28 million triangles/sec and 400 M pixels/sec. The Archos may get a better SGX GPU.

Xperia X-10 Graphics Demo

Nexus One: Adreno 200 graphics core with OpenGL ES 2.0; 22 M triangles/sec; 133 M pixels/sec; HD decode (720p)
Motorola Droid: PowerVR SGX530 graphics core with OpenGL ES 2.0; 7 M triangles/sec; 250 M pixels/sec; HD decode (720p)
Sony Xperia X10: Adreno 200 graphics core with OpenGL ES 2.0; 22 M triangles/sec; 133 M pixels/sec; HD decode (720p)
Acer Liquid: Adreno 200 graphics core with OpenGL ES 2.0; 22 M triangles/sec; 133 M pixels/sec; HD decode (720p)
Archos Phone: PowerVR SGX540?; 35 M triangles/sec; 1000 M pixels/sec

3-D Graphics Benchmark

Motorola Droid: 20.7 FPS (Android 2.0)
Nexus One: 27.6 FPS (Android 2.1)
Acer Liquid: 34 FPS (Android 1.6)
Xperia X10: 34+ FPS est. (Android 1.6)

Note: all phones were tested running WVGA resolution, 480 x 800 or 480 x 854. Different versions of Android will be a factor, e.g. Android 2.0+ reproduces 16 million colors vs 65K for 1.6. Older phones such as the G1 and iPhone 3GS may score 25-30 FPS, but they use the lower 480 x 320 resolution.
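The color counts in that note fall straight out of bits per pixel; a quick sketch of my own (assuming the usual 24-bit color for Android 2.0+ and 16-bit RGB565 for 1.6):

```python
# Number of distinct colors a display mode can show, from its bit depth.
def colors(bits_per_pixel):
    """Distinct colors representable at a given bit depth."""
    return 2 ** bits_per_pixel

print(colors(24))  # 16777216 -> the "16 million colors" of Android 2.0+
print(colors(16))  # 65536    -> the "65K colors" of Android 1.6 (RGB565)
```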


The Nexus One comes in with an impeccable 512 MB of RAM. This provides an element of future-proofing for the hardware and puts it in a league of its own. The Xperia X10 comes with 1 GB of ROM and 384 MB of RAM. The 1 GB means you'll be able to have twice as many apps on your phone, at least until Google lets you install apps to removable memory. The Acer Liquid and Droid are more or less the same.

Nexus One: 512 MB RAM, 512 MB ROM
Motorola Droid: 256 MB RAM, 512 MB ROM
Sony Xperia X10: 384 MB RAM, 1024 MB ROM
Acer Liquid: 256 MB RAM, 512 MB ROM
Archos Phone: not yet announced


The Nexus One uses an AMOLED screen, which provides crisper images and more saturated colors than a TFT-LCD. It's also more energy efficient. The Xperia X10 packs a 4.0 inch TFT screen with 854 x 480 resolution; expect picture quality similar to the Motorola Droid for the Sony Ericsson phone. The Archos Phone promises to deliver an interesting experience that could potentially make it the King of Androids.

Spot the difference: Top TFT-LCD screen and bottom OLED

Nexus One: 800 x 480 px, 3.7 in (94 mm), WVGA, AMOLED
Motorola Droid: 854 x 480 px, 3.7 in (94 mm), WVGA
Sony Xperia X10: 854 x 480 px, 4.0 in (102 mm), WVGA, TFT
Acer Liquid: 800 x 480 px, 3.5 in (89 mm), WVGA
Archos Phone: 854 x 480 px, 4.3 in (109 mm), WVGA, AMOLED

Display Input

All standard stuff here. All are pretty much capacitive with multi-touch, depending on the continent you buy your phone in.

Nexus One, Motorola Droid, Sony Xperia X10, Acer Liquid, Archos Phone: capacitive, multi-touch


The Xperia X10 has the largest battery – and, might I add, likely the best-quality battery of the lot. It's the same battery used in the Xperia X1, and it performed admirably. Talk time for the Nexus One is very good, and we expect the Xperia X10 to match this or be marginally better. Of concern is the Nexus One's 3G stand-by time of 250 hours. It's worse than the other phones', but not bad at a little over 10 days! Updated 21st Jan 2010 – confirmed Xperia battery times. The Xperia more or less performs at the same level as the other Android phones, delivering 5 hours of talk time.

Sony 1500 mAh Battery

Nexus One: 1400 mAh Li-Po
Motorola Droid: 1400 mAh Li-Po
Sony Xperia X10: 1500 mAh Li-Po
Acer Liquid: 1350 mAh Li-Po
Archos Phone: not yet announced

Talk/Standby 3G: per-phone figures not preserved

The phones are all capable of 3.5G (HSDPA 7.2 Mbit/s) data transfer. The Motorola Droid and Sony Xperia X10 give you a little bit extra, supporting 10.2 Mbit/s. Obviously the network must exist to support these speeds. The Motorola is the only one with Class 12 EDGE, but this is not too important in this day and age of 3G.
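To put those peak rates in perspective, here's a rough transfer-time calculation of my own; it assumes an idealized link and ignores protocol overhead and real-world radio conditions, which cut these peaks substantially:

```python
def transfer_seconds(megabytes, mbit_per_s):
    """Idealized time to move `megabytes` (decimal MB) over a link
    running flat-out at `mbit_per_s`."""
    bits = megabytes * 8 * 10**6          # decimal megabytes -> bits
    return bits / (mbit_per_s * 10**6)    # Mbit/s -> bit/s

# A 100 MB download at each HSDPA peak rate:
print(round(transfer_seconds(100, 7.2), 1))   # 111.1 seconds
print(round(transfer_seconds(100, 10.2), 1))  # 78.4 seconds
```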

HSDPA (Mbit/s): Nexus One 7.2 (1700 band); a 2.0 - 5.76 entry and bands (850, 900, 1800, 1900) listed without a phone
EDGE: Nexus One Class 10; Motorola Droid Class 12; Sony Xperia X10 Class 10; Acer Liquid Class 10
UMTS bands: Nexus One band 1/4/8

The Nexus One is the only Android phone that currently offers 802.11n connectivity. In fact, I can't think of any other phone out there that also has 802.11n. This might be the Google Talk phone we all thought was heading our way after all! All phones have either Bluetooth 2.0 or 2.1. These are essentially the same as far as data transfer (3 Mbit/s) is concerned; version 2.1 offers better power efficiency, though, and a few other enhancements.

Nexus One - Broadcom 802.11n

Bluetooth: Nexus One 2.1 + EDR; Motorola Droid 2.1 + EDR; Sony Xperia X10 2.1 + EDR; Acer Liquid 2.0 + EDR
Wi-Fi: 802.11b/g across the board; 802.11n on the Nexus One only

The 2 GB micro-SD card shipped with the Acer Liquid is unrealistic by today's standards. The Motorola Droid offers the best deal with a 16 GB micro-SD. The Sony Xperia X10 is shipped with an 8 GB micro-SD card, but remember the Xperia X10 also has that slightly bigger 1 GB flash memory on board as well, for an impressive total of 9 GB, expandable to a total of 33 GB. Google decided to save on costs by only offering a 4 GB micro-SD card with the Nexus One, but if the idea is to compete against the iPhone then 8 GB should be the minimum. Clearly the Motorola is on the right track with 16 GB shipped, and you can't ignore the impressive 1 GB ROM on the Xperia X10.

SanDisk working on 128GB Micro-SD

SIM card, 3.5 mm jack and Micro USB: standard across the range
Shipped Micro SD: Nexus One 4 GB; Motorola Droid 16 GB; Sony Xperia X10 8 GB; Acer Liquid 2 GB
Card speed class: Class 2, Class 6 and Class 2 entries (per-phone mapping not preserved)
Light sensor, proximity sensor and cell/Wi-Fi positioning: per-phone entries not preserved






Case Material

The Motorola's metal case is the sturdiest. Build quality for the Nexus One and Xperia X10 is very good. The Xperia X10 has a reflective plastic shell, whilst the Nexus One is more industrial, with Teflon and metal on the bottom. The Acer Liquid has average build quality, but that was always the intention with the Liquid in order to keep manufacturing costs low.


If you want a physical keyboard then the Droid is your only choice in the list. The keys on the Droid keyboard are basically flush, so you don't get the comfortable key-separation feel of a Blackberry keyboard. All of the phones (the Droid as well) have virtual keyboards which work in portrait or landscape mode.

Droid Slide-out keyboard


The Xperia X10 is one of the best camera phones. Sony used its camera know-how for its new smartphone lineup, and it will be hard to match up against Sony unless the other guys partner with someone like Canon. The X10 comes with an 8.1 MP camera with 16x digital zoom. The software has also been changed from standard Android to include typical camera options. Also included is a face-detection feature that recognizes up to four faces in a photo and appropriately tags/files it. The Motorola Droid comes in with a 5 MP camera with 4x digital zoom, compared to the 5 MP and 2x digital zoom on the Nexus One.

Xperia X10 sample photo

***Additional Photos***

Motorola Droid sample photo

Nexus One sample photo

Acer Liquid sample photo

Nexus One: 5 MP camera, 2x digital zoom
Motorola Droid: 5 MP camera, 4x digital zoom
Sony Xperia X10: 8.1 MP camera, 16x digital zoom
Flash: a "Y (dual)" entry survives (phone not identified)


Video wise, the Nexus One, Motorola Droid and Xperia will perform roughly the same.


The lightest and thinnest is the Nexus One. The Motorola is weighed down by the metal used. They are all roughly the same size as the iPhone 3GS, which comes in at 115.5 x 62.1 x 12.3 mm and weighs 135 g.


OS Level

The Nexus One has the most current OS level at 2.1. The Motorola Droid is expected to upgrade soon, as is the Acer Liquid. The heavily customized Xperia X10 will be more of a challenge to upgrade to 2.1.


The Xperia X10 shines when it comes to demonstrating how customizable Android really is. The other three phones have very few changes to the standard Android OS.

Sony TimeScape/MediaScape

Nexus One: near-stock Android
Motorola Droid: near-stock Android
Sony Xperia X10: Rachael UI
Acer Liquid: Acer UI 3.0

Application Market

We are likely to see more app markets emerge. Sony currently leads the way, and Motorola and HTC (Nexus One) will likely follow suit.

Nexus One: Android Market
Motorola Droid: Android Market
Sony Xperia X10: PlayNow, Android Market
Acer Liquid: Android Market


Mediascape is an ambitious effort to add decent media functionality to Android. Sony succeeds and also introduces a fun way to organize your media. Acer has Spinlet which is not as complex as Mediascape.


Social Networking

Sony again leads the customization way with Timescape. This is another good job by Sony, adding extra functionality to Android. Timescape helps manage your contacts better and brings social networking and contacts together in one application.


          Ubuntu 16.04 Released        
Canonical has at last announced the release of the sixth long-term support (LTS) version of its Ubuntu operating system. Ubuntu 16.04 comes with many new features, most of them focused on server technology, cloud computing, and the Internet of Things.

The most important features of this release:
- A new format for distributing software, snap, notable for being secure and robust.
- ZFS file system support, a very advanced technology for managing and recovering files; you can learn more about it in this article.
- LXD added as a pure container hypervisor for OpenStack Mitaka.
- Support for IBM's IBM Z and LinuxONE systems.

On the desktop side there are not many radical changes; the most important new features are:
- The Ubuntu Software Center has been replaced by GNOME Software.
- The Unity launcher can now be moved to the bottom of the screen.
- Improved support for HiDPI high-resolution displays.

As for software versions, this release as usual ships the latest stable applications, most notably:
- Linux kernel 4.4
- Python 3.5
- LibreOffice 5.1
- GNOME applications 3.18

Ubuntu 16.04 flavors

The official Ubuntu flavors were released as well: Ubuntu MATE, Kubuntu, Xubuntu, and Lubuntu. The differences are usually in the desktop environment used in place of the Unity desktop.

As usual for long-term support releases, the highlight of this one is five years of free official support from Canonical from the release date. That makes it the recommended release for the next two years, until 18.04 comes out.

Downloading Ubuntu 16.04

Ubuntu is available in several editions: one for the desktop, another for servers, another for cloud computing, and the last for developers. You can choose the edition that suits you, and the architecture that suits your machine, by visiting the official website.

Fahad, Saturday 2016/04/23, 2:44 PM

          CD-i 180 internals        
In the previous post I promised some ROM and chip finds. Well, here goes. To understand some of the details, you'll need some microprocessor and/or digital electronics knowledge, but even without that the gist of the text should be understandable.

The CDI 181 MMC unit contains the so-called Maxi-MMC board that is not used in any other CD-i player. Its closest cousin is the Mini-MMC board that is used in the CD-i 605 and CD-i 220 F1 players (a derivative of it is used in the CD-i 350/360 players).

The Mini-MMC board uses two 68HC05 slave processors for CD and pointing device control (they are usually called SERVO and SLAVE). The Maxi-MMC board does not have these chips, but it does have two PCF80C21 slave processors labeled RSX and TRANSDUCER that perform similar functions.

From their locations on the board I surmise that the RSX performs CD control functions; I know for sure that the TRANSDUCER performs only pointing device control. The latter is connected to the main 68070 processor via an I2C bus (I've actually traced the connections); I'm not completely sure yet about the RSX.

In order to emulate the pointing devices in CD-i Emulator, I had to reverse engineer the I2C protocol spoken by the TRANSDUCER chip; this was mostly a question of disassembling the "ceniic" and "periic" drivers in the ROM. The first of these is the low-level driver that serves as the common control point for the I2C bus; the second is the high-level driver that is instantiated separately for each type of pointing device. The ROMs support three such devices: /cdikeys, /ptr and /ptr2, corresponding to the player control keys and first and second pointing devices (the first pointing device is probably shared between the infrared remote sensor and the left pointing device port). Both pointing devices support absolute (e.g. touchpad) as well as relative (e.g. mouse) positioning.

Note that there is no built-in support for a CD-i keyboard or modem (you could use a serial port for this purpose).

However, knowing the I2C protocol does not tell me the exact protocol of the pointing devices, which therefore brings me no closer to constructing a "pointing device" that works with the two front panel MiniDIN-9 connectors. Note that these connectors are physically different from the MiniDIN 8 pointing device connectors used on most other CD-i players. According to the Philips flyers, these connectors have 6 (presumably digital) input signals and a "strobe" (STB) output signal. From the signal names I can make some educated guesses about the probable functions of the signals, but some quick tests with the BTN1 and BTN2 inputs did not pan out and it could be too complicated to figure out without measurement of a connected and working pointing device.

There is, however, also an infrared remote sensor that is supposed to expect the RC5 infrared signal protocol. This protocol supports only 2048 separate functions (32 groups of 64 each) so it should not be impossible to figure out, given a suitably programmable RC5 remote control or in the best case a PC RC5 adapter. I've been thinking about building one of the latter.
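The 2048-function figure falls out of the standard RC5 frame layout: a 5-bit system address gives the 32 groups and a 6-bit command gives the 64 functions per group. A minimal sketch of packing the 14-bit frame (the helper is my own illustration, not code from the player or emulator):

```python
def rc5_frame(system, command, toggle=0):
    """Pack a standard 14-bit RC5 frame: two start bits, a toggle bit,
    a 5-bit system address and a 6-bit command (MSB first)."""
    assert 0 <= system < 32 and 0 <= command < 64
    return (0b11 << 12) | (toggle << 11) | (system << 6) | command

# 32 address groups x 64 commands = 2048 separate functions
print(32 * 64)  # 2048
print(bin(rc5_frame(system=20, command=5)))  # 0b11010100000101
```

Extended RC5 variants repurpose the second start bit as an extra command bit, but the plain protocol is as above.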

There is also a third possibility for getting a working pointing device. Although the case label of the front MiniDIN 8 connector is "CONTROL", the Philips flyers label it "IIC", which is another way of writing "I2C", although they don't give a pinout of the port. It seems plausible that the connector is connected to the I2C bus of the 68070, although I haven't been able to verify that yet (the multimeter finds no direct connections except GND, so some buffering must be involved). If there is indeed a connection, I would be able to externally connect to that bus and send and receive the I2C bus commands that I've already reverse engineered.

If even this doesn't work, I can always connect directly to the internal I2C bus, but that involves running three wires from inside the player to outside and I'm not very keen on that (yet, anyway).

Now, about the (so far) missing serial port. There is a driver for the 68070 on-chip UART in the ROMs (the u68070 driver which is accessible via the /t2 device), and the boot code actually writes a boot message to it (CD-i Emulator output):
  PHILIPS CD-I 181 - ROM version 23rd January, 1992.
Using CD_RTOS kernel edition $53 revison $00
At first I thought that the UART would be connected to the "CONTROL" port on the front, but that does not appear to be the case. Tonight I verified (by tracing PCB connections with my multimeter) that the 68070 serial pins are connected to the PCB connector on the right side (they go through a pair of SN75188/SN75189 chips and some protection resistors; these chips are well-known RS232 line drivers/receivers). I even know the actual PCB pins, so if I can find a suitable 100-pins 0.01" spaced double edge print connector I can actually wire up the serial port.

Now for the bad news, however: the ROMs do not contain a serial port download routine. They contain a host of other goodies (more below) but not this particular beast. There is also no pointing device support for this port, contrary to all other players, so connecting up the serial port would not immediately gain me anything; I still need a working pointing device to actually start a CD-i disc…

There are no drivers for other serial ports in the ROMs, but the boot code does contain some support for a UART chip at address $340001 (probably a 68681 DUART included in the CDI 182 unit which I don't have). The support, however, is limited to the output of boot messages although the ROMs will actually prefer this port over the 68070 on-chip device if they find it.

As is to be expected from a development and test player, there is an elaborate set of boot options, but they can only be used if the ROMs contain the signature "IMS-TC" at byte offset $400 (the ROMs in my player contain FF bytes at these locations). And even then the options prompt will not appear unless you press the space bar on your serial terminal during player reset.

However, adding a -bootprompt option that handles both the signature and the space bar press to CD-i Emulator was not hard, and if you use that option with the 180 ROMs the following appears when resetting the player:
  PHILIPS CD-I 181 - ROM version 23rd January, 1992.

A-Z = change option : <BSP> = clear options : <RETURN> = Boot Now

Boot options:- BQRS
As specified, you can change the options by typing letters and pressing Enter will start the boot process with the specified options.

From disassembling the boot code of the ROMs I've so far found the following options:

D = Download/Debug
F = Boot from Floppy
L = Apply options and present another options prompt (Loop)
M = Set NTSC Monitor mode
P = Set PAL mode
S = Set NTSC/PAL mode from switch
T = Set NTSC mode
W = Boot from SCSI disk (Winchester)

It could be that there's also a C option, and I've as yet not found any implementations of the Q and R options that the ROMs include in the default, but they could be hidden in OS-9 drivers instead of the boot code.

Once set, the options are saved in NVRAM at address $313FE0 as defaults for prompts during subsequent reboots; they are not used for reboots where the option prompt is not invoked.

Options D, F and W look interesting, but further investigation leads to the conclusion that they are mostly useless without additional hardware.

Pressing lower-case D followed by Enter / Enter results in the following:
Boot options:- BQRSd
Boot options:- BDQRS
Enter size of download area in hex - just RETURN for none
called debugger

Rel: 00000000
Dn: 00000000 0000E430 0007000A 00000000 00000000 00000001 FFFFE000 00000000
An: 00180B84 00180570 00313FE0 00410000 00002500 00000500 00001500 000014B0
SR: 2704 (--S--7-----Z--) SSP: 000014B0 USP: 00000000
PC: 00180D2E - 08020016 btst #$0016,d2
One might think that entering a download size would perform some kind of download (hopefully via the serial port), but that is not the case. The "download" code just looks at location $2500 in RAM, which is apparently supposed to be already filled (presumably via an In-Circuit Emulator or something like it).

However, invoking the debugger is interesting in itself. It looks like the Microware low-level RomBug debugger that is described in the Microware documentation, although I haven't found it in any other CD-i players. One could "download" data with the change command:
debug: c0
00000000 00 : 1
00000001 00 : 2
00000002 15 : 3
00000003 00 :
Not very user-friendly, but it could be done. The immediate catch is that it doesn't work with unmodified ROMs because of the "IMS-TC" signature check!

Trying the F option results in the following:
Boot options:- BQRSf
Boot options:- BFQRS
Booting from Floppy (WD 179x controller) - Please wait
This, however, needs the hardware in the CDI 182 set (it lives at $330001). I could emulate that in CD-i Emulator of course, but there's no real point at this time. It is interesting to note that the floppy controller in the CD-i 605 (which I haven't emulated either at this point) is a DP8473, which is register-compatible with the uPD765A used in the original IBM PC but requires a totally different driver (it also lives at a different memory address, namely $282001).

Finally, trying the W options gives this:
Boot options:- BQRSw
Boot options:- BQRSW
Booting from RODIME RO 650 disk drive (NCR 5380 SCSI) - Please wait
Exception Error, vector offset $0008 addr $00181908
Fatal System Error; rebooting system
The hardware is apparently supposed to live at $410000 and is presumably emulatable; it's identical or at least similar to the DP5380 chip that is found on the CD-i 605 extension board, where it lives at $AA0000.

Some other things that I've found out:

The CDI 181 unit has 8 KB of NVRAM, but it does not use the M48T08 chip that's in all other Philips players; it's just a piece of battery-backed RAM that lives at $310000 (even addresses only) and is supported by the "nvdrv" driver via the /nvr device.
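"Even addresses only" means each NVRAM byte sits in its own 16-bit word slot, so the 8 KB part occupies 16 KB of the 68070 address map. A small sketch of that mapping (my own illustration; only the $310000 base address comes from the player):

```python
NVRAM_BASE = 0x310000  # base address of the NVRAM in the CDI 181

def nvram_byte_address(i):
    """68070 address of NVRAM byte i: the chip sits on the even bytes
    of the 16-bit data bus, so consecutive bytes are two addresses apart."""
    assert 0 <= i < 8 * 1024
    return NVRAM_BASE + 2 * i

print(hex(nvram_byte_address(0)))     # 0x310000
print(hex(nvram_byte_address(8191)))  # 0x313ffe, top of the 8 KB part
```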

In the CD-i 180 player the timekeeping functions are instead performed by a RICOH RP5C15 chip; the driver is appropriately called "rp5c15".

And there is a separate changeable battery inside the case; no "dead NVRAM" problems with this player! I don't know when the battery in my player was last changed but at the moment it's still functioning and had not lost the date/time when I first powered it on just over a week ago.

The IC CARD slot at the front of the player is handled like just another piece of NVRAM; it uses the same "nvdrv" driver but a different device: /icard. According to the device descriptor it can hold 32 KB of data. I would love to have one of those!
          Micro Blog Tweet Points to “Micro V-Card” Word Clouds        

I love words, and Sam Lawrence reminded me of just how much when he Twittered an experiment he cleverly ran with Many Eyes. Sam produced word-cloud visualizations from the datasets of some of the top bloggers he was interested in tracking, which in turn provided a glimpse into the psyches, or at least the current passions, of the writer/blogger/thought leaders whose words he analyzed and visualized.

Well, a clever stratagem, and one that worked for Sam, not only in allowing folks to take a critical look at how “loudly” certain words played in their own or others’ conversations, but also by brilliantly encouraging a further look into the work he is obviously doing well in dissecting patterns in cross-departmental collaboration. Words were the bait; further ego-boo was the absolute dragnet. Almost half of the “thought leaders” Sam used in his experiment responded with comments on his blog post, and many, I’m sure, took more than a glance at his previous posts and findings.

And speaking of appealing to vanity, I was quite shocked and rather overwhelmed to see myself in the next round of experiments, this time focused on women, as the first round seemed to be a “male only thing,” as a few of us noticed. I enjoyed learning more about the women Sam selected and liked the diversity that his choices highlighted. Whereas the men had a common theme of PEOPLE, the women had more eclectic word choices.

In addition, I noticed a number of folks in twitterdom running their own self-analysis. For example, I watched in envy as Goldie Katsu, whose tweets I enjoy, created her own Goldie’s Gabs cloud. And it wasn’t any surprise to find People, Video, and Life prominent among her most used words. (I have been frustrated in my attempts to register on the Many Eyes website to create my own visualization, and it tells me I will have to contact IBM support.)

Some of the loudest words surfacing might have been the result of the topical nature of some of the datasets. For example, Yahoo being in the news probably created a disproportionate Yahoo word-cloud size, as someone commented. Since I am a relatively new blogger and not nearly as prolific or notable as the other women Sam chose, my list was based on a data source of only some 4,000 words, as contrasted with the 45,000-word sampling of the likes of Kara Swisher.

But regardless, it would indeed be a marvelous calling card to have our most oft used words associated with our profiles. I’m sure it is only a matter of time until someone supplies a widget to make that ability dynamic in blog posts.

For those familiar with the StrengthsFinder signature themes, or those who once took the time and invested $35 to discover their strengths online (or, as in my case many years ago, had their company invite them to participate in such analysis), it might be interesting to correlate the word themes with the signature themes. I randomly checked out this gentleman, Steve Borsch.

Steve Borsch's signature themes matched mine almost one-to-one (the exception was his Woo strength replacing my Empathy theme); out of 34 possibilities, Ideation, Learner, Strategic, and Input were aligned as themes.

Here's the interesting part for me. When I quickly flipped through his post and biography I found so many common touch points between us. Without knowing a great deal about him, I would even dare to say there were numerous similarities in some of the demographics: age, passions, interests, work experiences, family composition, at least by what I could glean by briefly reading about his interests.

I'd love to see how the "personality" type mappings align with or contrast to Sam's "micro v-card" idea, as he calls it, which is boiling us down to our signature words as well as themes.

By the way, this exercise also evoked memories of a childhood passion I had for just sitting down with our 1963 Funk & Wagnalls and skimming the pages. (We couldn’t afford the more expensive Britannica, and ours was the supermarket “buy a volume a week” version with few pictures and lots of words.) I wonder now how closely the words that spoke the loudest to me then, in those childhood moments, found their way into my present vernacular.

A big thanks to Sam for catalyzing all these activities and firing up these thoughts.

          IBM compatible PC        
Where is IBM in the PC industry now? They allowed clones, and in the end they lost out. Apple's only route, had they continued down that path, would have been to spin off the OS division so that at least that could have been saved.
          RE: IBM compatible PC        
IBM didn't "allow" clones. They figured the only way they could bring a PC to market in the time frame they wanted was to use off-the-shelf parts, but they kept the BIOS secret to try to force lock-in. Compaq opened the doors to the clone market by reverse engineering the BIOS, and the rest is history.
          RE[2]: IBM compatible PC        
Since IBM published the source to the BIOS in the back of the manual... no, they didn't keep it secret. And copying the BIOS and putting it into a Taiwanese clone motherboard was as easy as burning an EPROM. I know, because I worked for a company that did just that. I left because I couldn't deal with the blatant theft.
          I miss Apple's fantastic mechanical keyboards...        
What I really miss now are the fantastic Apple mechanical keyboards... the magnificent Apple Extended Keyboard I/II and the funky and geeky Apple IIGS. The best keyboards ever produced -- with the exception of the IBM Model M line -- I believe!!! With venerable Alps mechanical keyswitches, a simple but graceful design and durable steel plating, the ol' Apple keyboards are a typist's ideal dream... and that sublimated key-top print never becomes worn out like the cheap silk-screen or laser engraving today's cheap keyboards have. Wow, I really sound like a keyboard geek! :-)
          RE: IBM compatible PC        
Even though they have now left the market, I argue that they have done better out of it than they ever would have had they kept the architecture closed. The open architecture stole market share, but it drove down prices, allowing the less well-off to buy machines and therefore increasing the size of the market in total (99% of 100 is 99, but 58% of 200 is 116 - lower market share, but bigger numbers). Indeed, it seems that keeping technology closed is the best way of eventually pricing/driving your technology (or worse, your company) out of a market. All companies die eventually, but it happens even quicker with proprietary stuff. The list of companies that have made this mistake and been killed off (or had their technologies killed off) is legion:

- Apple (got *near* it several times, and was arguably saved only by buying more and more off-the-shelf parts)
- Commodore
- Atari
- Digital
- Data General
- IBM (yes, even using proprietary technology, IBM saw strong competition from clone-makers (Amdahl) and companies exploiting new markets (DEC/Digital))
- Apollo
- HP (with MPE and then PA-RISC)

Note that in several cases, having a *better* technology (cf. Amigas vs. 1990 PC-compatibles, or IRIX versus Windows) did nothing to save the company in the long run. Producing a *generic* product - that sells! The only things that will save you and/or your technology are (a) being open, intentionally or otherwise (IBM, AMD, AT&T (Unix)), (b) being massively entrenched (guess who), or (c) corporate welfare. Sooner or later the movers and shakers of the buyers' market move on, and if you don't compete with a compatible product, You're Dead.
          RE[2]: IBM compatible PC        
And the Acorn Risc architecture of course.........
          RE[3]: IBM compatible PC        
Phoenix reverse engineered the IBM ROM, which was NOT published. What was published was the supervisory calls (software interrupts) -- the API. This is almost all that was needed to reverse engineer a compatible BIOS. They did intend to keep it secret, but because they only included the most basic hardware interfacing code in the ROM (about 2K of assembly), this was a much easier task than reverse engineering the Apple ROM, which included much of the OS (128K as I remember it).
          RE[2]: IBM compatible PC        
MS is as closed as you get. They go out of their way to make sure their products don't play nicely with others.
          RE[3]: IBM compatible PC        
No-one who reads this website will tell you I was defending Microsoft! Far from it.
          [urls/news] The Global Grid: China's HPC Opportunity        
Thursday, November 11, 2004
Dateline: China
For this posting, I'm using an annotated urls format.  Let's begin.
The Global Grid
Grid computing.  HPC (high-performance computing).  Lots of trade press coverage.  Lots of academic papers.  Generally, this is a GREAT convergence.  Didn't hold with AI (artificial intelligence), but the coverage of grid computing is much more pervasive.  Also, it's an area where I believe that systems integrators (SIs) in China can play with the globals.  It's new enough that there are no clear leaders.  Okay, maybe IBM is a clear leader, but it's certainly not an established market.
It's also a market where Chinese SIs can leverage work done for domestic applications for Western clients.  This is NOT true in areas such as banking applications; the apps used in China are very different from the apps used in the States.  Fundamentally different systems.  But a lot of grid work is more about infrastructure and custom development.  There's also a lot of open source in the grid sphere.
I've selected some of the best papers and sites for review.  This is certainly not meant to be comprehensive, but simply follow the links for more info.
One last note:  Clicking on any of the following links will likely lead you to an abstract and possibly to some personal commentary not included in this posting.  You may also find related links found by other Furl users.
The "Bible" of the grid world.  The home page will lead to many other relevant papers and reports.  See also The Anatomy of the Grid (PDF).
Hottest journal issue in town!!  Papers may be downloaded for free.  See also Grid computing: Conceptual flyover for developers.
One of the better conferences; covers applications and provides links to several excellent papers and presentations.
Well, the link has been replaced.  Try to get a hold of this paper.  It WAS available for free.  SOA meets the grid.  The lead author, Liang-Jie Zhang, is a researcher at the IBM T.J. Watson Research Center and chair of the IEEE Computer Society Technical Steering Committee (technical community) for Services Computing.  Contact him at .  Ask for his related papers, too.
Several excellent papers; recent conference.  Middleware:  Yes, middleware is the key to SI opportunities.
Conference held earlier this month!!  See who is doing what in China.
Want a competitive edge in the grid space?  This is it!!
NOTE:  A search for "grid computing" in my Furl archive yields 164 hits (and most are publicly searchable).  See .
Other News
Outsourcing & Offshoring:
I don't agree with this, but it's worth reading, especially considering the source.  I agree that China shouldn't try to be a clone of India, but the arguments in support of the domestic market don't consider margins.
I'll be writing a column for the AlwaysOn Network about the disconnect between China's foreign policy initiatives and the realities of the IT sector.  Suffice it to say that SIs in China should NOT chase after the EU.  Again, do NOT confuse foreign policy with corporate policy!!
More of the same.  Read my comments about Romania by clicking the link ...
Google is coming to China, too.  Think MS Research in Beijing.
Another great move by IBM; they're clearly leading the pack.
This article is a bit confusing.  I suspect that TCS is simply copying the IGS China strategy.  But it's worth noting that they're moving beyond servicing their American clients with a presence in China.
Yes, yes and yes.  Expect a lot more of this.  I wouldn't be surprised to see China's SIs forced to move a bit lower on the U.S. SI food chain for partnerships.  Move up the chain by thinking verticals!!
No need to click; it's all about security.
No, not really a new model; more about a new certification!!  Just what the world needs ...
Enterprise Software:
The title says it all.
Maybe the "P" in "LAMP" should stand for "Plone"?
A strategy for USERS, i.e., SIs in China.
Marketing & Management:
Product Management 101, courtesy of the Harvard Business School.
Spread this throughout your organization ... and then ramp up with some paid tools.
SCM (supply chain management) meets marketing, but with a general management and strategy slant.
G2 planning strategies.  A wee bit mathematical, but still fairly easy to follow.
Expect the next original posting in two or three weeks; my next column for the AlwaysOn Network will be sent to this list.  Off to HK/SZ/ZH/GZ next week.
David Scott Lewis
President & Principal Analyst
IT E-Strategies, Inc.
Menlo Park, CA & Qingdao, China (current blog postings optimized for MSIE6.x) (access to blog content archives in China) (current blog postings for viewing in other browsers and for access to blog content archives in the US & ROW) (AvantGo channel)
To automatically subscribe click on .

          [news] Grudge Match: China vs. Europe + "It's Malaysia Time ..."        
Tuesday, September 7, 2004
Dateline: China
This week marks the debut of my bi-weekly (or so) column for the AlwaysOn Network, Silicon Valley's premier online social networking venue (and unofficially linked to Silicon Valley's premier in person social networking venue, the Churchill Club; I'm a member of both).  I will be sharing "Letter from China" columnist duties with Paul Waide, the head of Pacific Epoch, a Shanghai-based boutique consultancy that advises hedge funds on alternative investments in China.  My first column is on Shanghai and a couple/few forthcoming columns will examine cultural differences between Chinese Nationals, Chinese-Americans and Anglo-Americans, especially within the context of IT and IT marketing.  I will post my AlwaysOn "Letter from China" columns to this blog/e-newsletter, although please be advised that my intended audience are readers based in Silicon Valley.
Grudge Match: China vs. Europe
Staying on topic, I'd like to make a comment about a recent "Grudge Match" on the AlwaysOn Network.  See the item marked "Grudge Match" for 08.05.04 (5 August 2004) at .
In the referenced "Grudge Match," China was pitted against Europe.  China received 45% of the votes in contrast to Europe's 55%.  Frankly, I'm surprised that China did so well.  I've found that the AO "Grudge Match" results tend to indicate sentiment more so than reality.  For example, a recent match pitted SpaceShipOne against NASA and SSO absolutely clobbered NASA (besides, perhaps most of the votes for NASA came from either Ames or the Blue Cube).  Of course, SSO is a high school science experiment compared to what NASA is doing, but I believe the results accurately reflect sentiment. 
But what is amazing (to me, at least) is that China was pitted against Europe in the first place!  Let's face it, this is a rather goofy "grudge match."  For Europe to include First World nations such as Germany, France, the U.K., Ireland, Italy, Switzerland, the Netherlands, Belgium, Sweden, Finland, Norway, Denmark (yes, some countries are intentionally left out) -- and to compare the collective whole of First World Europe (a.k.a. "Western Europe") to China is absurd.  If this was First World Europe vs. China circa 2020, okay.  But TODAY?  Yet, the sentiment indicator showed a strong vote in favor of China.  Europe "won," but barely.
I propose the following "grudge match":  China vs. "Eastern Europe" (i.e., the former Soviet Bloc).  Look, if China can do so well against Europe as a whole (including First World Europe), I'm sure China would absolutely kick Second World Europe's butt!!  And a China "grudge match" against Eastern Europe more accurately reflects current "history."
But even this is a bit misleading.  The real "grudge match" is this:  China + India vs. Second World Europe.  And given this choice, only someone stranded on Mars for the past decade might choose Second World Europe.  Yet, this is the real so-called "grudge match."  First World Europe is in descent, to be sure, but it's descending from a high altitude.  It will take at least a decade or two for China (and/or India) to truly match First World Europe.  But China ALREADY is superior to Second World Europe.  And don't rant about NATO and EU memberships; this is simply window dressing.  Then combine China with India versus Second World Europe, playing into my "Golden Triangle" theme, i.e., it's all about the U.S., India and China.  This is where the action is, ESPECIALLY in IT.
"It's Malaysia Time ..."
I must be getting punchy since I'm borrowing a theme from a beer commercial, but it seems that Malaysia is experiencing its 15 minutes of fame.  The Philippines has recently been "hot," and several articles of late have been touting Malaysia (see, for example, an article which appeared in Space Daily).  Frankly, I'm getting tired of all this nonsense.  Look, when it comes to ITO (IT outsourcing) in East Asia, there are just two choices, i.e., India and China.  And, it's not really a competition; both have their strengths and weaknesses.  A few crumbs to Singers (Singapore), maybe even a few crumbs to the Kiwis (New Zealand).  The Philippines deserves notice, albeit passing notice, and Malaysia might be okay for some BPO.  But ITO?  Come on, give me a break!!  See my Furl archive for more links.
The only thing I recently found interesting regarding Malaysia was an article on Satyam's IT boot camp in Malaysia.  This isn't really unique; after all, IBM has been doing this sort of thing for decades.  So has HP.  Kind of like training plus a bit of brainwashing, but the brainwashing is acceptable since it includes political survival skills -- and said skills are essential, especially in F500 corporations.  But I like the idea of SI (systems integrator)-based training:  This way SIs can focus on "real" versus theoretically perceived needs.
IT Tidbits
Which certifications have the best ROI (return on investment)?  Playing off the idea of SI-based training, which are the most important certifications?  Well, Cisco leads with three out of the top five, although Microsoft picks up a couple of "wins" when looking at fastest-growing ROI, with RedHat and Oracle getting one win each.  SIs in China may also want to benchmark how much U.S. employees are paid given a certain certification, e.g., Microsoft DBAs receive an annual average salary of US$80,600.  Think about how much SIs in China pay for a certified Microsoft DBA.  For example, what do they get paid in Jinan -- or even in Dalian?  Compare this to US$80,600.  Spot any opportunities?  See and .
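The salary comparison above can be made concrete with a back-of-the-envelope calculation. The US$80,600 figure is the one cited in the post; the China-side salary below is a purely hypothetical placeholder, not a benchmarked number:

```python
# Back-of-the-envelope cost arbitrage for the comparison above.
# US_DBA_SALARY is the annual average cited in the post; the
# China-side figure is a hypothetical placeholder for illustration.

US_DBA_SALARY = 80_600       # annual, USD (cited in the post)
CHINA_DBA_SALARY = 12_000    # annual, USD (hypothetical placeholder)

savings = US_DBA_SALARY - CHINA_DBA_SALARY
ratio = US_DBA_SALARY / CHINA_DBA_SALARY

print(f"Savings per certified DBA per year: ${savings:,}")
print(f"US cost is {ratio:.1f}x the placeholder China cost")
```

Plug in whatever a certified Microsoft DBA actually earns in Jinan or Dalian and the spread -- the "opportunity" the post is pointing at -- falls straight out.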
ITO in the news.  Two particularly noteworthy items.  First, ITO got Slashdotted.  The Slashdot links are worth a review.  Probably some good insight into what American software engineers are thinking and feeling.  The second is a review of Lou Dobbs' new book on ITO and BPO.  Mr. Dobbs is a well-respected host on CNN; his views shouldn't be taken lightly.  A couple of excerpts from the review:
"GE, as Dobbs makes clear in abundant detail, is only one of many companies outsourcing high-tech and professional jobs to India and other parts of the world where wage expectations are lower.  Among the others spotlighted by Dobbs for outsourcing jobs to India, the Philippines, Romania, Ireland, Poland and other countries are IBM, SAS Institute, Intel, Microsoft, Perot Systems, Apple, Computer Associates, Dell, Hewlett-Packard, Oracle and Sun Microsystems."  My comment:  Romania is the Changsha of Third World Europe, i.e., their programmers are about as cheap as programmers come.
"'India can provide our software; China can provide our toys; Sri Lanka can make our clothes; Japan make our cars.  But at some point we have to ask, what will we export?  At what will Americans work?  And for what kind of wages?  No one I've asked in government, business or academia has been able to answer those questions,' Dobbs writes."  See the review in the Tallahassee Democrat or my Furl link .
So-called infrastructure vendors beat out app vendors in terms of their ability to meet expected ROI and TCO (total cost of ownership) levels.  I don't really like the way infrastructure and application vendors are defined in this article and related survey, but top honors go to IBM and Microsoft.  There's a lot being written between the lines, but in general this plays into my "build-to-a-stack" strategy, albeit Oracle is left behind.  See .
Speaking of Microsoft ...  A good, quick review of the various IBUs (independent business units) at Microsoft.  (See .)  For a take on MBS, see .
New marketing technologies.  Interesting article from the premier issue of CMO (Chief Marketing Officer).  There are two ways to view this:  1) which marketing technologies can be used by SIs in China for their own marketing endeavors, and 2) which marketing technologies will likely be adopted by retailers, e-commerce sites, financial institutions and numerous other sectors -- and which in house skills does an SI in China need to implement these new technologies (all of which are IT-related)?  See .
Looking for partners in the utility computing space?  For a start, try the top 25 vendors.  (See .)  Yankee gives a quick look at utility computing ROI (see ).  HP chimes in with their take, too (see ; it's a PDF).
The battle of the SI globals.  Two related articles both based on the same Forrester report.  (See and .)  Issues being considered include scalability (i.e., handling US$100+ million accounts), the need for broad offerings (e.g., strategy consulting) and expanding geographical presence (hey, where is EDS in China?).   "(T)he (Forrester) study finds that Infosys and Wipro have melded together a mix of CMMI, P-CMM, Six Sigma and ISO 9000 to create a culture focused on consistent and repeatable processes and value-added tools."  For China's SIs, mostly food for thought -- and a bit of dreaming.
... and how to battle the globals.  The article was a bit silly, after all, G2000 firms joining forces to battle Accenture or Infosys doesn't really fit the notion of smaller firms joining forces.  But I believe that they're on the right track and that a myriad of partnerships will be formed to most effectively capture new business and battle the globals.  However, ISVs (independent software vendors) have to walk a very fine line.  SIs need to carefully consider ISV responses and existing alliances.  See .
"Infosys to set up second outsourcing facility in China."  The article states that Infosys is running out of space in their Pudong facility and that they're scouting for additional digs.  Come on, guys, running out of space?  There's not enough space in the Shanghai Pudong Software Park?  I don't think so ...  The reality is that Infosys needs to find lower cost developers.  As my column on Shanghai for AO's "Letter from China" notes, developers in Shanghai are a bit pricey compared to other places in China.  Infosys China is primarily servicing their global customers in China and looking for high-end integration within the domestic market.  However, this is a tough nut to crack and Infosys will need another development center to lower their overall costs -- and this is why they are looking for additional space IN ANOTHER CITY.  The idea that they're running out of space in the SPSP is ridiculous.  (I've been to their Shanghai digs ...)  See .
Zensar gets broader press coverage.  Kind of like watching a meme, a couple of non-Indian IT trades have picked up the Zensar/Broadengate announcement.  See and .
"Rethinking the business case for Java."  A good article.  Hmmm ... maybe not much of a case, eh?    Hey, I'm still a believer.  See .  Of course, Java programming ain't what it used to be ...
"The selling of SOA."  Two-part series in Line56.  SUPERB!!  (I prefer the singular to the plural, i.e., "architecture" versus "architectures"; personal preference.)  Reviews various viewpoints on SOA.  See and .
Urls update.  Expect to see lots and lots of stuff on software engineering and development.  Great stuff, too!!  Later this week.

          [urls] Web Services Differentiation with Service Level Agreements        
Wednesday, September 1, 2004
Dateline: China
The following is a sampling of my top ten "urls" for the past couple/few weeks.  By signing up with Furl (it's free), anyone can subscribe to an e-mail feed of ALL my urls (about 100-250 per week) -- AND limit by subject (e.g., ITO) and/or rating (e.g., articles rated "Very Good" or "Excellent").  It's also possible to receive new urls as an RSS feed.  However, if you'd like to receive a daily feed of my urls but do NOT want to sign up with Furl, I can manually add your name to my daily Furl distribution list.  (And if you want off, I'll promptly remove your e-mail address.)
Top Honors:
* Web Services Differentiation with Service Level Agreements, courtesy of IBM T.J. Watson; as the title suggests, this paper tackles SLAs.  See also Web Services QoS: External SLAs and Internal Policies, by the same author.  The latter paper was the invited keynote at the 1st Web Services Quality Workshop (this site provides links to abstracts for all the workshop papers as well as links to each author's personal site).
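The SLA papers above are about differentiating web-service offerings by service level. A toy version of the core idea -- different customer tiers get different response-time guarantees -- can be sketched as follows (tier names and thresholds are invented for illustration, not taken from the papers):

```python
# Toy service-level check in the spirit of the SLA papers above.
# Tier names and latency thresholds are invented for illustration.

SLA_TIERS = {
    "gold": 0.5,    # max average response time, in seconds
    "silver": 1.0,
    "bronze": 2.0,
}

def meets_sla(tier: str, response_times: list) -> bool:
    """True if the average observed latency is within the tier's bound."""
    limit = SLA_TIERS[tier]
    return sum(response_times) / len(response_times) <= limit

print(meets_sla("gold", [0.2, 0.4, 0.3]))   # well inside the gold bound
print(meets_sla("gold", [0.9, 1.1, 0.8]))   # would violate it
```

The papers go much further (external SLAs versus internal policies, QoS negotiation), but the differentiation mechanism starts from exactly this kind of tiered check.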
Other best new selections (in no particular order):
* Product Focused Software Process Improvement: PROFES 2004 (if you're going to read only one tech book this year, let it be this!!)
* Legacy systems strike back!!  We all know that there is a good market in servicing legacy systems.  See the following: Arriba: Architectural Resources for the Restructuring and Integration of Business Application (an introduction), Identifying Problems in Legacy Software, and Evolution of Legacy Systems.  
* Online Communities in Business: Past Progress, Future Directions, Five Keys To Building Business Relationships Online and Advantages of Using Social Software for Building Your Network.  (I can say with a fairly high level of confidence that these tools can be used to expand your business network.  Been there, done that.  Give it a try.  Do I already know you and would you like an invitation to join LinkedIn?  If the answer to both questions is "yes," let me know ...)
* Carnegie Mellon Project Aura Video (gets a bit silly at times, but the language translation component was interesting to see; the R-T example is still years away, but the idea is intriguing and this is where collaboration tools need to go)
* Innovation: Strategy for Small Fish (from the Harvard Business School; however, NVIDIA would not have been my choice for a case study)
* Stata Labs: Managing at a Distance, for Less (a pretty good case study; I firmly believe that China's systems integrators/contract developers need world-class collaboration tools and this describes one of the formats I support)
* An Authoring Technology for Multidevice Web Applications (one of my favorite topics -- and an area where I believe SIs in China can take the lead)
* Cheapware (or, "Changsha Gone Wild!!"; hey Qilu clan, are you listening?  Go, Ding, go!!)
* How To Team With A Vendor (a "must read" -- and evidently a lot of my readers already did, even though I only made a passing reference in a previous posting)
Examples of urls that didn't make my "Top Ten List":
> ITU Internet Reports 2004: The Portable Internet (looks like this might be a great series; less biased than the typical IT advisory services report -- and a much better value, too)
> Software Cost Reduction (courtesy of the <U.S.> Naval Research Lab, this paper is a bit dated, but still worth reading; addresses problems with large-scale systems, albeit a bit light on practical examples) 
> Japan IT Outsourcing 2004-2008 Forecast: IDC (might be a worthwhile purchase, especially for the Dalian-based systems integrators)
> The Power of No (Linux as a bargaining tool <see my Furl comments, too>; make Microsoft shake in their boots!!)
> Web Design Practices (a good reference site)
and many, many more ...

          [news] "2004 State of Application Development"        
Friday, August 13, 2004
Dateline: China
Special issues of journals and magazines are often quite good -- if you're into the subject matter.  But the current issue of VARBusiness is absolutely SUPERB!!  EVERY SYSTEMS INTEGRATOR SHOULD READ IT ASAP -- STOP WHAT YOU'RE DOING AND READ THIS ISSUE!!  (Or, at the very least, read the excerpts which follow.)  See .  They even have the survey results for 36 questions ranging from change in project scope to preferred verticals.  In this posting, I'm going to comment on excerpts from this issue.  My comments are in blue.  Bolded excerpted items are MY emphasis.
The lead article and cover story is titled, "The App-Dev Revolution."  "Of the solution providers we surveyed, 72 percent say they currently develop custom applications or tailor packaged software for their customers. Nearly half (45 percent) of their 2003 revenues came from these app-dev projects, and nearly two-thirds of them expect the app-dev portion of total revenue to increase during the next 12 months."  I view this as good news for China's SIs; from what I've observed, many SIs in China would be a good fit for SIs in the U.S. looking for partners to help lower their development costs.  "By necessity, today's solution providers are becoming nimbler in the software work they do, designing and developing targeted projects like those that solve regulatory compliance demands, such as HIPAA, or crafting wireless applications that let doctors and nurses stay connected while they roam hospital halls."  Have a niche; don't try to be everything to everyone.  "Nine in 10 of survey respondents said their average app-dev projects are completed in less than a year now, with the smallest companies (those with less than $1 million in revenue) finishing up in the quickest time, three months, on average."  Need for speed.  "The need to get the job done faster for quick ROI might explain the growing popularity of Microsoft's .Net framework and tools.  In our survey, 53 percent of VARs said they had developed a .Net application in the past 12 months, and 66 percent of them expect to do so in the coming 12 months."  My Microsoft build-to-their-stack strategy.  "Some of the hottest project areas they report this year include application integration, which 69 percent of VARs with between $10 million or more in revenue pinned as their busiest area.  Other top development projects center around e-commerce applications, CRM, business-intelligence solutions, enterprisewide portals and ERP, ..."  How many times have I said this?    
"At the same time, VARs in significant numbers are tapping open-source tools and exploiting Web services and XML to help cut down on expensive software-integration work; in effect, acknowledging that application development needs to be more cost-conscious and, thus, take advantage of open standards and reusable components.  Our survey found that 32 percent of VARs had developed applications on Linux in the past six months, while 46 percent of them said they plan to do so in the next six months.  The other open-source technologies they are using today run the gamut from databases and development tools to application servers."  I guess there's really an open source strategy.  I come down hard on open source for one simple reason:  I believe that SIs in China could get more sub-contracting business from a build-to-a-stack strategy.  And building to the open source stack isn't building to a stack at all!!  "As a business, it has many points of entry and areas of specialization.  Our survey participants first arrived in the world of app dev in a variety of ways, from bidding on app-dev projects (45 percent) to partnering with more experienced developers and VARs (28 percent) to hiring more development personnel (31 percent)."  For SIs in China, simply responding to end-user RFQs is kind of silly.  Better to partner on a sub-contracting basis.  "According to our State of Application Development survey, health care (36 percent), retail (31 percent) and manufacturing (30 percent) ranked as the most popular vertical industries for which respondents are building custom applications.  Broken down further, among VARs with less than $1 million in total sales, retail scored highest, while health care topped the list of midrange to large solution providers."  Because of regulatory issues, I'm not so keen on health care.  I'd go with manufacturing followed by retail.  My $ .02.  
"When it comes to partnering with the major platform vendors, Microsoft comes out the hands-on winner among ISVs and other development shops.  A whopping 76 percent of developers in our survey favored the Microsoft camp.  Their level of devotion was evenly divided among small, midsize and large VARs who partner with Microsoft to develop and deliver their application solutions.  By contrast, the next closest vendor is IBM, with whom one in four VARs said they partner.  Perhaps unsurprisingly, the IBM percentages were higher among the large VAR category (those with sales of $10 million or more), with 42 percent of their partners coming from that corporate demographic.  Only 16 percent of smaller VARs partner with IBM, according to the survey.  The same goes for Oracle: One-quarter of survey respondents reported partnering with the Redwood Shores, Calif.-based company, with 47 percent of them falling in the large VAR category.  On the deployment side, half of the developers surveyed picked Windows Server 2003/.Net as the primary platform to deliver their applications, while IBM's WebSphere application server was the choice for 7 percent of respondents.  BEA's WebLogic grabbed 4 percent, and Oracle's 9i application server 3 percent of those VARs who said they use these app servers as their primary deployment vehicle."  Microsoft, Microsoft, Microsoft.  Need I say more?  See .
The next article is on open source.  "Want a world-class database with all the bells and whistles for a fraction of what IBM or Oracle want?  There's MySQL.  How about a compelling alternative to WebSphere or WebLogic?  Think JBoss.  These are, obviously, the best-known examples of the second generation of open-source software companies following in the footsteps of Apache, Linux and other software initiatives, but there are far more alternatives than these.  Consider Zope, a content-management system downloaded tens of thousands of times per month free of charge, according to Zope CEO Rob Page.  Some believe Zope and applications built with Zope are better than the commercial alternative they threaten to put out of business, Documentum.  Zope is also often used to help build additional open-source applications.  One such example is Plone, an open-source information-management system.  What began as a fledgling movement at the end of the past decade and later became known as building around the "LAMP stack" (LAMP is an acronym that stands for Linux, Apache, MySQL and PHP or Perl) has exploded to virtually all categories of software.  That includes security, where SpamAssassin is battling spam and Symantec, too.  Popular?  Well, it has now become an Apache Software Foundation official project.  The use of open source is so widespread that the percentage of solution providers who say they partner with MySQL nearly equals the percentage who say they partner with Oracle: 23 percent to 25 percent, respectively."  There are plenty of choices for those SIs willing to play the open source game.  See .
"It's all about integration" follows.  "There are many reasons for the surge in application-development projects (the recent slowdown in software spending notwithstanding).  For one, many projects that were put on hold when the downturn hit a few years ago are now back in play.  That includes enterprise-portal projects, supply-chain automation efforts, various e-commerce endeavors and the integration of disparate business systems."  Choose carefully, however.  Balance this data with other data.  Right now, I see a lot more play with portals and EAI.  "Indeed, the need for quality and timely information is a key driver of investments in application-integration initiatives and the implementation of database and business-intelligence software and portals.  A healthy majority of solution providers say application integration is a key component of the IT solutions they are deploying for customers.  According to our application-development survey, 60 percent say their projects involved integrating disparate applications and systems during the past 12 months."  "Some customers are moving beyond enterprise-application integration to more standards-based services-oriented architectures (SOAs).  SOAs are a key building block that CIOs are looking to build across their enterprises."  Anyone who regularly reads any one of my three IT-related blogs knows that I'm gung-ho on SOAs.  "Even if your customers are not looking for an SOA, integrating different systems is clearly the order of the day.  To wit, even those partners that say enterprise portals or e-business applications account for the bulk of their business note that the integration component is key."  Yes, integration, integration, integration.  I'll be saying this next year, too.  And the year after ...  "Another way to stay on top of the competition is to participate in beta programs."  Absolutely true -- and a good strategy, too.  See .
The next article is on utility computing versus packaged software.  Again, if you read what I write, you know that I'm also gung-ho on utility computing.  "According to VARBusiness' survey of application developers, more than 66 percent of the applications created currently reside with the customer, while 22 percent of applications deployed are hosted by the VAR.  And a little more than 12 percent of applications developed are being hosted by a third party.   Where services have made their biggest inroads as an alternative to software is in applications that help companies manage their customer and sales information."  The article goes on to state that apps that are not mission-critical have the best chance in the utility computing space.  Time will tell.  Take note, however, that these are often the apps that will most likely be outsourced to partners in China.  "Simply creating services from scratch and then shopping them around isn't the only way to break into this area.  NewView Consulting is expanding its services business by starting with the client and working backward.  The Porter, Ind.-based security consultant takes whatever technology clients have and develops services for them based on need."   And focus on services businesses and .NET, too.  "Most application developers agree that services revenue will continue to climb for the next year or two before they plateau, resulting in a 50-50 or 60-40 services-to-software mix for the typical developer.  The reason for this is that while applications such as CRM are ideally suited to services-based delivery, there are still plenty of other applications that companies would prefer to keep in-house and that are often dependent on the whims of a particular company."  Still, such a split shows a phenomenal rise in the importance of utility computing offerings.  See .
Next up:  Microsoft wants you!!  (Replace the image of Uncle Sam with the image of Bill Gates!!)  Actually, the article isn't specifically about Microsoft.  "Microsoft is rounding up as many partners as it can and is bolstering them with support to increase software sales.  The attitude is: Here's our platform; go write and prosper.  IBM's strategy, meanwhile, is strikingly different.  While it, too, has created relationships with tens of thousands of ISVs over recent years,  IBM prefers to handpick a relatively select group, numbering approximately 1,000, and develop a hand-holding sales and marketing approach with them in a follow-through, go-to-market strategy."  Both are viable strategies, but NOT both at the same time!!  "To be sure, the results of VARBusiness' 2004 State of Application Development survey indicates that Microsoft's strategy makes it the No. 1 go-to platform vendor among the 472 application developers participating in the survey.  In fact, more than seven out of 10 (76 percent) said they were partnering with Microsoft to deliver custom applications for their clients.  That number is nearly three times the percentage of application developers (26 percent) who said they were working with IBM ..."  Percentages as follows:  Microsoft, 76%; IBM, 26%; Oracle, 25%; MySQL, 23%; Red Hat, 17%; Sun, 16%; Novell, 11%; BEA, 9%.  I said BOTH, NOT ALL.  Think Microsoft and IBM.  However, a Java strategy could be BOTH a Sun AND IBM strategy (and even a BEA strategy).  See .
There was another article I liked called, "How to Team With A Vendor," although it's not part of the app-dev special section per se.  This posting is too long, so I'll either save it for later or now note that it has been urled.  See .  Also a kind of funny article on turning an Xbox into a Linux PC.  See .  See also .
Quick note:  I'll be in SH and HZ most of next week, so I may not publish again until the week of the 23rd.
David Scott Lewis
President & Principal Analyst
IT E-Strategies, Inc.
Menlo Park, CA & Qingdao, China (current blog postings optimized for MSIE6.x) (access to blog content archives in China) (current blog postings for viewing in other browsers and for access to blog content archives in the US & ROW) (AvantGo channel)
To automatically subscribe click on .

          [humor] The Mind of an American Programmer (courtesy of Sun Microsystems)        
Monday, July 19, 2004
Dateline: China
Go to: .  A wee bit off topic, but a great perspective on the thoughts of an American programmer ... I mean, "developer."  This is the funniest thing I've seen in a while; it accurately captures life in Silicon Valley.  Even better than Dilbert (although yesterday's Dilbert on execs collecting "trophy wives" was pretty good). 
There's audio with the animation, so turn on your speakers and turn up the volume.  Watch out for the jab at IBM Global Services ...
Next:  As promised, the blog posting on "The Evolution of New Technologies," a review of five emerging technologies.
David Scott Lewis
President & Principal Analyst
IT E-Strategies, Inc.
Menlo Park, CA & Qingdao, China

          [news] A Special Report on Business Intelligence        
Thursday, July 8, 2004
Dateline: China
One of my favorite industry trades, Computerworld, recently published a special report on business intelligence (BI).  (See .)  As regular readers of this blog know, I'm hot, hot and hotter on BI.  Not only are BI apps booming in their own right, but BI also provides an open door into other structured data apps (e.g., ERP and SCM).  Also, there is a burgeoning number of apps requiring both BI and knowledge management (KM) solutions, providing a host of new opportunities.  (For now, think of BI for structured data and KM for unstructured data.  But the lines between KM and BI are blurring.)
The Computerworld report includes an introduction to BI titled, "BI for the Masses," an introduction to Web harvesting, and a superb article on text mining; there are several online exclusives as well.  In this post, I'm going to focus on an article titled, "Predictions for BI's Future," by providing excerpts with commentary.  As usual, items in bold are MY emphasis; items in red are MY commentary.
Embedded BI.  "Over the next four to six years, BI systems will become embedded in small, mobile devices, such as manufacturing sensors and PDAs in the field, which in turn will be linked to more centralized systems." -- Erik Thomsen, distinguished scientist, Hyperion Solutions Corp., Sunnyvale, Calif.
PB DM (petabyte data mining).  "Within three years, companies and governmental agencies will be able to successfully run analytics within a centralized data warehouse containing 1 petabyte or more of data -- without performance limitations." -- Dave Schrader, technology futurist, Teradata, a division of NCR Corp., El Segundo, Calif.
HPC to the rescue!  "Within the next two to three years, high-performance computing technology used by scientific and engineering communities and national R&D labs will make its way into mainstream business for high-performance business analytics. This transition will be driven by the growing volume of complex data and the pressing need for companies to use forecasting and predictive analytics to minimize risk and maximize profit-generating opportunities." -- Phil Fraher, chief operating officer, Visual Numerics Inc., San Ramon, Calif.
BI meets AI.  "In the near future, business leaders will manage by exception, and automated systems will handle significant loads of routine tasks." -- Mike Covert, chief operating officer, Infinis Inc., Columbus, Ohio
Visualization.  "Over the next two to three years, BI systems will automatically suggest appropriate visualizations, which in turn will dramatically increase the use of visualization and our understanding of complex relationships." -- Erik Thomsen, distinguished scientist, Hyperion Solutions
BI + BPM + BAM.  "Businesses need more than a rearview mirror to drive their business forward into the next era. A new category of intelligence tools will emerge over the next two to three years that combines business process management, business activity monitoring (BAM) and business intelligence to enable the "actively managed enterprise." This will combine the scorecards and rearview-analysis capabilities of BI with the real-time, event-driven analysis of BAM and feed that information into automated business processes for on-the-fly steering of the business towards scorecard goals. This will exponentially elevate the speed at which businesses are able to operate, adapt and make critical decisions." -- Tim Wolters, chief architect of business activity monitoring solutions, webMethods Inc., Fairfax, Va.
Bottom line:  Go to a BI-related ACM or IEEE CS conference and you'll hear a lot of presentations on all of the apps described above.  It's where the rubber meets the road:  This stuff is real!!  However, it's important to differentiate "real" BI from much more simplistic reporting software (like a good "chunk" of the so-called BI solutions provided by Business Objects, Cognos and even Microsoft -- via their recent acquisition of ActiveViews).
A BI Site to Review
Last week I came across a paper published in the current issue of the Journal of Intelligent and Fuzzy Systems.  In this paper the project called "Data Mining and Decision Support for Business Competitiveness: A European Virtual Enterprise" (SolEuNet) is used as a case study and "the source of lessons learned."  The paper provides a link to the SolEuNet Web site (see ); at the SolEuNet site I found a wealth of case studies with supporting technical documents on leading-edge BI apps (see, for example, Workpackage 7 on "Combining Data Mining and Decision Support with Information Systems" at ).  Remember, strategy consulting isn't merely about comparing product specs (regardless what the IT advisory services may say).
The Gartner Conference on BI
I got my hands on three i-banking analyst reviews of the Gartner BI conference.  The Morgan Stanley report (dated 27 April) noted that customer activity levels appeared to be strong and "many seem to be taking a more strategic approach to BI, resulting in the emergence of larger transactions."  (My emphasis.)  Corporate performance management (CPM) is driving some of the larger deals, with Cognos and Hyperion taking the lead.  Evidently, systems integrators (SIs) are getting religion and developing collaterals around CPM messaging.  RBC Dominion Securities produced a more in-depth report (dated 29 April) and noted the following:
  • Gartner expects the market to accelerate in 2004.
  • The ETL (extraction, transformation, and load) market will flatten (finally).
  • CPM is hot.  "Hyperion, Cognos, and SAS appeared to be the best positioned non-ERP vendors to capitalize on the CPM market opportunity."  However, "(they) believe that SAP is the best-positioned large enterprise software vendor to execute in both the BI and CPM market ..."
  • Finally, the Gartner BI conference itself was hot, with 973 attendees, an increase in attendance of 70% over last year.
UBS chimed in with their own report (dated 30 April), which in some ways was a bit more technical than the other two reports cited above.  UBS noted that heterogeneous environments require independent tools (e.g., it is very difficult to get heterogeneous data into an ERP data warehouse (DW)).  Gartner's rule of thumb is that an ERP-derived BI/DW solution should be on the short-list only if more than 60% of an organization's BI data resides within that single app vendor.  UBS also noted that the importance of BI is leading to the formation of BI competency centers.  They also believe that SAP and Microsoft remain significant long-term threats to independent software vendors such as Cognos and Business Objects.  BTW, all three reports seemed a bit down on Business Objects.
Another Computerworld feature on BI
Sometimes advertorials can be a good thing.  A case in point is the 26 April issue of Computerworld which provides a link to a new, six page Computerworld White Paper on BI.  The paper is titled, "Charting the Course: A Guide to Evaluating Business Intelligence Products"; it's a good, practical read.  Tactical, product spec advice and guidelines, but still a good read.  The PDF can be found at .
Recent Tidbits on BI
The New Straits Times (Malaysia) via Asia Africa Intelligence Wire reported on 24 June that SAS "expects the BI market in Asia to register double-digit growth for the next five years.  (Don Cooper Williams, director of marketing and alliances for SAS Asia-Pacific) cites a recent report from research house International Data Corp, which predicts that BI software market in the region (excluding Japan) to grow by 12 per cent this year, up from 7.5 per cent in 2003."  Note to SIs in China:  BI isn't just hot in the States; leverage your skills for serving the U.S. market and the domestic market.
From the channel, India Business Insight (also via Asia Africa Intelligence Wire) on 31 May announced that "Business Objects has entered into a long-standing systems integrator agreement with Wipro Infotech (WI) to provide business intelligence (BI) solutions to customers."  Note to SIs in China:  Don't be left without a dance partner.
Additional Articles for Review
I did a quick scan of trade lit and found a few articles worth reading.  First, the March-April issue of Financial Executive talks about CPM -- Corporate Performance Management -- as it relates to BI.  The May issue of Insurance & Technology takes a vertical look at BI (rather basic apps), as does the April issue of Business Credit.  Always think verticals.
A Final Wrap (or Should I Say, "Rap"?)
Back to Computerworld.  More specifically, see the 29 March issue of Computerworld.  According to a survey conducted by IBM Business Consulting Services, BI is a high priority on the plate of C-level execs.  In a Computerworld poll, 39% of IT executives listed business intelligence projects as their most critical IT projects.  By 2005, market research firm IDC projects that the worldwide market for business intelligence software will total about $6 billion -- up from $2.5 billion in 2003 -- signaling a major increase in business intelligence projects.  IT executives say the skills they need on business intelligence projects include systems integration, data modeling, database administration, data standardization and project management.
David Scott Lewis
President & Principal Analyst
IT E-Strategies, Inc.
Menlo Park, CA & Qingdao, China (current blog postings optimized for MSIE6.x) (access to blog content archives in China) (current blog postings for viewing in other browsers and for access to blog content archives in the US & ROW) (AvantGo channel)
To automatically subscribe click on .

          [news] IT Spending Trends        
Tuesday, July 6, 2004
Dateline: China
A quick recap on IT spending trends from three recently published Smith Barney surveys.  The three reports are the May and June editions of their CIO Vendor Preference Survey and the 6 June issue of softwareWEEK.  Tom Berquist, my favorite i-banking analyst, was the lead for all three reports.  I have a backlog of blogs to write, so I'll use as many quotes as possible and add context where necessary.  (I'm mostly extracting from my smartphone bookmarks for these reports.  Warning:  I may have coded the May and June issues incorrectly, but the quotes are correct.)  NOTE:  Highlighted items (e.g., items in bold, like this sentence) are MY emphasis.  Items in red are my commentary.
Starting with the Survey editions, "(t)he strongest areas of spending appear to be software (apps, security, storage, and database) and network equipment/apps (Gigabit Ethernet, WLAN, VPNs)" and regarding software, "larger and more well known vendors continue to dominate the list in each category with vendors such as Microsoft, SAP, IBM, Veritas, Symantec and Computer Associates getting significantly more mentions in each of their groups than the remaining vendors did."  However, the report admits that their sample group might be biased.  Yes, vendors matter -- and so do vendor partnering strategies.  However, I'm a bit skeptical about CA and I don't particularly care for Veritas or Symantec.  Not my part of the universe.
"Applications again stand out as a clear area of strength."  "Within applications, Enterprise Resource Planning (ERP), Supply Chain Management (SCM), Customer Relationship Management (CRM) and Business Intelligence (BI) all showed extremely well ..."  Well, this is the first sign that a recovery may be in the making for SCM.  However, I'd emphasize BI and ERP, followed by CRM; don't count on a lot happening in the SCM space just yet.  Some other key surveys do NOT validate that SCM is in recovery.  "In terms of specific vendors, Microsoft, Symantec, Veritas, SAP, and Adobe were the top beneficiaries of CIOs intentions to increase spending."  The report continues that only SAP showed statistically significant results, both in ERP and SCM.  "Results were more mixed for best-of-breed vendors in this area, suggesting that horizontal applications vendors are having a tough time competing with the large ERP vendors even as vertically-focused vendors continue to have some measure of success on this front."  For the more adventurous SIs in China, SAP presents a lot of opportunities.  Tread carefully, though.  And "Adobe's enterprise strategy appears to be gaining momentum.  Adobe was a clear standout in content management ..."  "Survey results were also positive (though somewhat less so) for other leading content management players, notably Microsoft and IBM."  Another "win" for Microsoft.  Funny that none of the traditionally leading content management players were mentioned.  A take on Linux:  "Linux continues to garner mind share, but large enterprises remain the main adopter.  Interestingly, nearly 83% of our respondents stated that they were not currently moving any applications to Linux.  Of the 17% that said they were moving applications to Linux, only one company under $1.0 billion in revenue was making the transition to Linux confirming our views that Linux is primarily being used by large companies to shift Unix applications to Linux on Intel."
"Among CIOs who indicated a higher level of consulting spend, IBM was the clear winner, followed by Accenture as a distant second.  Unisys was also mentioned as a vendor being considered, but it was a distant third.  However, we note that Unisys being mentioned ahead of a pure-play consultant like BearingPoint (a low number of mentions, which included mentions of decreased spending) or EDS is positive, given that Unisys chooses to focus in 2 specific verticals, including one-public sector-that was not in the survey."  "Over two-thirds of CIOs indicated that they do not use IT outsourcers.  Most of the rest said they were unlikely to change the level of outsourcing spend.  IBM, ACS and CSC were the only vendors explicitly mentioned as likely to get more outsourcing business."  The "two-thirds" figure will likely change in favor of outsourcing.  This trend is fairly clear.  See a BCG report at , although the report takes a relatively broad perspective.
From softwareWEEK, "(t)he CIOs were also very focused on rapid 'time to market' with purchases.  None were interested in starting projects that would take greater than 2 quarters to complete."  "This requirement was not a 'payback' requirement, but rather an implementation time frame requirement.  The CIOs did recognize that payback times could be longer, though the payback times on IT utility spending were much shorter than on applications or emerging area spending."
"In terms of spending, the CIOs all used a similar methodology for making decisions that essentially divides their IT spending into one of three categories: 1) sustained spending on their 'IT utility' (i.e., infrastructure such as network equipment, servers, storage, databases, etc.); 2) new project spending on applications (business intelligence, portals, CRM, etc.); and 3) investment spending on select emerging areas (grid/utility computing, identity management, collaboration, etc.)  It was pretty obvious that the CIOs recognized that business unit managers were more interested in spending on new applications/emerging areas than on the IT utility ..."  "(S)ome of the CIOs were experimenting with grid/utility computing initiatives to try to increase their utilization of storage/servers and reduce the amount of new equipment to be purchased.  In one example, a CIO showed their storage/server utilization around the world and many regions were in the 50% or worse bucket for average utilization.  Their goal was to use grid computing architectures and storage area networks (along with faster communication links) to better share the pool of resources."  Yes, this is it!!  Take this to heart!!  If you think grid and utility computing are Star Trek stuff, think again.
"In terms of new projects, the CIOs mentioned they were spending on business intelligence, portal/self-service applications, CRM, and collaboration.  Collaboration was a heated discussion, with all CIOs commenting that this was a big problem for them and there was no clear solution on the market.  While it wasn't completely clear to the audience what the CIOs were looking for in a collaboration solution, the elements that were described included: more intelligent email, corporate instant messaging, web conferencing, integrated voice over IP with instant messaging (so that a conversation could quickly shift from typing to talking), and collaborative document editing (spreadsheets, presentations, publications, etc.).  Within the business intelligence arena, business activity monitoring was discussed as was building of enterprise data warehouses/data marts.  The portal/self-service applications being built or deployed were primarily for customer and employee self-service (remote access to email, applications, and files was a big deal for all of the companies).  On the CRM front, the discussion from one CIO was around their need to increase revenues and manage channel conflict better."  [I'll be posting to this blog a bit more about collaboration opportunities over the next week.]
"While vendors were not discussed in any detail during the panel, the CIOs did say that they remain open to working with smaller vendors (public and private) as long as they have plenty of relevant references (in their industry, particularly with close competitors) and they offer a compelling value proposition versus larger vendors.  One CIO stated that they get called by 20 startups a week to sell products to them, but most of them cannot articulate the value proposition of their product.  Nonetheless, the CIO does take 5 meetings a month from startups because some of them are working on things that are interesting to the business."
Whew ...  Lots of good materials.  To reiterate, all highlighted items are my emphasis.  Bottom line:  The market is heating up.  Get your ISV relationships in place.  Pick your verticals (see the "Tidbit on Microsoft" which follows).  Pick your apps -- and the apps I like the best are content management and BI, although ERP is looking good, too.  Collaboration can be a major source of revenue if the SI can provide a truly effective solution.
Tidbits on Microsoft
A quick update on some happenings in the Redmond universe.  (See ; the article is titled, "Microsoft focuses on its enterprise-applications business".)  First, app areas that are of particular interest to MS include those for manufacturing and life sciences.  So, how about a MS build-to-their-stack strategy focused on either of these two verticals?  Second, MS is moving beyond purely horizontal offerings to very specific functionality.  Their Encore acquisition is an example of MS moving in this direction.  Finally, new releases of all four of Microsoft's ERP product lines are due for this year.  Not surprisingly, MBS marketing is up 20% from FY04.  Hmmm ... ERP spending intentions are strong and MS is a key player in this space -- with several updated offerings scheduled for release this year.  Another opportunity?
Tidbits on Infosys
Infosys formally enters the IT strategy consulting biz.  (See .)  Yes, it was inevitable.  In April Infosys Consulting, Inc. was formed and, "(i)t's no secret that the winning model will be high-end business consulting combined with high-quality, low-cost technology delivery done offshore," according to Stephen Pratt, the head of Infosys' consulting unit.  The Infosys Consulting unit now has 150 employees in the States and plans to expand to 500 within three years.  Note to SIs in China:  You need more -- a lot more -- IT strategy types.  And you need people in the States (at least on an "as needed" basis) in order to capture -- and serve -- new accounts.
David Scott Lewis
President & Principal Analyst
IT E-Strategies, Inc.
Menlo Park, CA & Qingdao, China (current blog postings optimized for MSIE6.x) (access to blog content archives in China) (current blog postings for viewing in other browsers and for access to blog content archives in the US & ROW) (AvantGo channel)
To automatically subscribe click on .

          GE, ARM and IBM voted most influential IoT companies | FC Business Intelligence        

Industry-wide survey uncovers the most powerful companies, people and innovators in the Internet of Things

(PRWeb February 19, 2015)

Read the full story at

          Comment on Mapa astral 2.0: IBM Watson shows it is better than Astrology, by Marina Brasil        
O.O/ Awesome
          Comment on Mapa astral 2.0: IBM Watson shows it is better than Astrology, by Laura Paladim        
Marina Brasil
          Comment on Mapa astral 2.0: IBM Watson shows it is better than Astrology, by Lara de Pina        
What if it's Amanda Daher Elias.. lol
          Japan Post Group, IBM and Apple develop iPads and custom apps to connect the elderly in Japan with services, their families and the community        
          Partial Telecommute Automation Subject Matter Expert in the Walnut Creek Area        
A staffing and recruitment firm needs applicants for an opening for a Partial Telecommute Automation Subject Matter Expert in the Walnut Creek Area. Candidates will be responsible for the following:
  • Driving Enterprise Change Management and Configuration Management process improvement projects
  • Applying deep expertise in delivering process automation and ServiceNow
  • Programming automation of report and scorecard creation via integration of reporting tools, Remedy and ServiceNow
Skills and Requirements Include:
  • Partial telecommute; must work some of the time at offices in Walnut Creek or Corona, CA
  • Bachelor's Degree in CIS, Business Administration, or related field, or equivalent experience
  • Minimum ten (10) years IT experience within a large matrixed organization
  • Software Developer experience in: HTML, HTML5, CSS, CSS3, PHP, MySql, Ruby, etc.
  • Deep working knowledge of automation and reporting tools such as HPOO, IBM Rational, BluePrism, Tableau and others
  • All other requirements listed by the company
          Apple and IBM transform the business market with new mobile solutions        
          Apple and IBM release the first batch of IBM MobileFirst for iOS apps        
          IBM BPM Designer/Developer        

          Galileo Performance Explorer to Showcase Cloud Migration Capabilities at IBM InterConnect 2017 Las Vegas        

Silver Sponsor Galileo will demonstrate solutions for monitoring on premise, public or private cloud environments that take the mystery out of cloud migration and monitoring.

(PRWeb March 18, 2017)

Read the full story at

          Galileo Announces New Infrastructure Performance Management Agent for Cisco/IBM VersaStack Solution        

Galileo’s integrated deep reporting and predictive analytics capabilities align IT technology to business objectives to see everything and miss nothing.

(PRWeb September 19, 2016)

Read the full story at

          Galileo Performance Explorer Adds IBM DS8000 Storage Array Agent to its Infrastructure Performance Management Suite        

Newest cloud-based capabilities continue vision to provide greater insight into storage systems across all industry vendors.

(PRWeb June 23, 2016)

Read the full story at

          Galileo Named Among 25 IBM Solutions Transforming Business by CIO Solutions        

IT infrastructure performance management suite supports data centers on premise, in the Cloud, or part of a hybrid approach

(PRWeb June 17, 2016)

Read the full story at

          Big Blue shows off fastest graphene transistor        

155 billion cycles, which is more than Beijing
IBM has been showing off its latest graphene transistor, which can execute 155 billion cycles per second. It is about 50 percent faster than previous experimental transistors.

The new transistor has a cut-off frequency of 155GHz. The previous one could manage 100GHz and was shown off last year.

Top Big Blue boffin Yu-Ming Lin said that the research also shows that high-performance, graphene-based transistors can be produced at low cost using standard semiconductor manufacturing processes. In other words, commercial production of graphene chips is not far away.

Graphene is a single-atom-thick layer of carbon atoms structured in a hexagonal honeycomb form. It could be used for high-performance RF (radio frequency) transistors.

Electrons move faster in graphene transistors than in conventional transistors, which enables faster data transfers. Unfortunately they are not ideal for PCs yet, because they do not have the on-off ratio required for digital switching operations. They are, however, good at processing analog signals.

          IBM Domino 9 PDF Export Tool 1.0        
IBM Domino 9 PDF Export Tool to export Lotus Domino DXL Database.
          Comment on IBM to invest $200m in global Watson IoT HQ by IBM to invest $200m in global Watson IoT HQ | F...        
[…] Technology giant IBM has announced that it will invest $200m in the new global headquarters for its Watson IoT business in Munich.  […]
          Component update in Dr.Web 11.0 products for Windows, the Dr.Web CureNet! curing utility, Dr.Web KATANA 1.0, Dr.Web Enterprise Security Suite 10.0 and 10.1, Dr.Web 11.0 for MS Exchange, Dr.Web 11.0 for Microsoft ISA Server and Forefront TMG, Dr.Web 11.0 for IBM Lotus Domino and Dr.Web AV-Desk 10.0        

July 24, 2017

Doctor Web announces an update of the Dr.Web Scanning Engine scanning service (, the Dr.Web Anti-rootkit API module (, and the Dr.Web for Outlook Plugin module ( in a number of Dr.Web products. The update introduces internal changes.

Changes have been made to the scanning service to improve the component's stability.

In the Dr.Web Anti-rootkit API module, the mechanism for curing system modules has been refined.

In the Dr.Web for Outlook Plugin, which ships with Dr.Web products for Windows, Dr.Web Enterprise Security Suite and Dr.Web AV-Desk, an issue has been fixed whereby spam checking could stop working after the dwantispam.exe process was restarted; plugin loading at Microsoft Outlook startup has also been optimized.

In addition, for Dr.Web Enterprise Security Suite 10.0 and 10.1, the cause of a Preventive Protection false positive during joint operation with Infowatch software products has been eliminated.

The update will take place automatically but will require a computer restart.


Good News!!!

The IBM Hursley Club can now accept American Express cards. In line with the other credit and debit cards we take, the minimum spend is £5.00.

          Pre-Ordered Lunchtime Sandwiches, Baguettes and Salads        

IBM Hursley Club runs a pre-order lunch time service for its members. You can find it under Bar & Catering in the Navigation Menu on the left.

This gives you the choice to order any fillings from Chicken Mayo to Cheese Salad, on white or wholemeal baguettes or sandwiches!

Salads can also be ordered online via this website.

Choose from:
Chicken Salad £2.95
Chicken Caesar Salad £3.25
Plain Salad £1.95
Tomato Pasta Salad £2.65
Prawn Salad £3.25
Tuna Salad £2.95

read more

          Evening Meal deal        

The IBM Hursley Club serves meals in the evening until 9pm. When you purchase a meal off the Meal Board, your first drink is included in the price.

Note: The IBM Hursley Club is a members Club open to members and their guests.

          Chapter 1        
What is a pointer?
One of those things beginners in C find difficult is the concept of pointers.
The purpose of this tutorial is to provide an introduction to pointers and their
use to these beginners.
I have found that often the main reason beginners have a problem with pointers
is that they have a weak or minimal feeling for variables, (as they are used in
C). Thus we start with a discussion of C variables in general.
A variable in a program is something with a name, the value of which can vary.
The way the compiler and linker handle this is to assign a specific block of
memory within the computer to hold the value of that variable. The size of
that block depends on the range over which the variable is allowed to vary. For
example, on PCs the size of an integer variable is 2 bytes, and that of a long
integer is 4 bytes. In C the size of a variable type such as an integer need
not be the same on all types of machines.
When we declare a variable we inform the compiler of two things, the name of the
variable and the type of the variable. For example, we declare a variable of
type integer with the name k by writing:
int k;

On seeing the "int" part of this statement the compiler sets aside 2 bytes of
memory (on a PC) to hold the value of the integer. It also sets up a symbol
table. In that table it adds the symbol k and the relative address in memory
where those 2 bytes were set aside.
Thus, later if we write:
k = 2;

we expect that, at run time when this statement is executed, the value 2 will be
placed in that memory location reserved for the storage of the value of k. In C
we refer to a variable such as the integer k as an "object".
In a sense there are two "values" associated with the object k. One is the value
of the integer stored there (2 in the above example) and the other the "value"
of the memory location, i.e., the address of k. Some texts refer to these two
values with the nomenclature rvalue (right value, pronounced "are value") and
lvalue (left value, pronounced "el value") respectively.
In some languages, the lvalue is the value permitted on the left side of the
assignment operator '=' (i.e. the address where the result of evaluation of the
right side ends up). The rvalue is that which is on the right side of the
assignment statement, the 2 above. Rvalues cannot be used on the left side of
the assignment statement. Thus: 2 = k; is illegal.
Actually, the above definition of "lvalue" is somewhat modified for C. According
to K&R II (page 197): [1]
"An object is a named region of storage; an lvalue is an expression referring
to an object."
However, at this point, the definition originally cited above is sufficient. As
we become more familiar with pointers we will go into more detail on this.
Okay, now consider:
int j, k;

k = 2;
j = 7;    <-- line 1
k = j;    <-- line 2

In the above, the compiler interprets the j in line 1 as the address of the
variable j (its lvalue) and creates code to copy the value 7 to that address.
In line 2, however, the j is interpreted as its rvalue (since it is on the
right hand side of the assignment operator '='). That is, here the j refers to
the value stored at the memory location set aside for j, in this case 7. So,
the 7 is copied to the address designated by the lvalue of k.

In all of these examples, we are using 2 byte integers, so all copying of
rvalues from one storage location to the other is done by copying 2 bytes.
Had we been using long integers, we would be copying 4 bytes.

Now, let's say that we have a reason for wanting a variable designed to hold
an lvalue (an address). The size required to hold such a value depends on the
system. On older desktop computers with 64K of memory total, the address of
any point in memory can be contained in 2 bytes. Computers with more memory
would require more bytes to hold an address. Some computers, such as the IBM
PC, might require special handling to hold a segment and an offset under
certain circumstances. The actual size required is not too important so long
as we have a way of informing the compiler that what we want to store is an
address.

Such a variable is called a pointer variable (for reasons which hopefully
will become clearer a little later). In C when we define a pointer variable
we do so by preceding its name with an asterisk. In C we also give our
pointer a type which, in this case, refers to the type of data stored at the
address we will be storing in our pointer. For example, consider the variable
declaration:

int *ptr;

ptr is the name of our variable (just as k was the name of our integer
variable). The '*' informs the compiler that we want a pointer variable,
i.e. to set aside however many bytes is required to store an address in
memory.
The int says that we intend to use our pointer variable to store the address
of an integer. Such a pointer is said to "point to" an integer.

However, note that when we wrote int k; we did not give k a value. If this
definition is made outside of any function, ANSI compliant compilers will
initialize it to zero. Similarly, ptr has no value, that is, we haven't
stored an address in it in the above declaration. In this case, again if the
declaration is outside of any function, it is initialized to a value
guaranteed not to point to any C object or function. A pointer initialized
in this manner is called a "null" pointer.

The actual bit pattern used for a null pointer may or may not evaluate to
zero since it depends on the specific system on which the code is developed.
To make the source code compatible between various compilers on various
systems, a macro is used to represent a null pointer. That macro goes under
the name NULL. Thus, setting the value of a pointer using the NULL macro, as
with an assignment statement such as ptr = NULL, guarantees that the pointer
has become a null pointer. Similarly, just as one can test for an integer
value of zero, as in if (k == 0), we can test for a null pointer using
if (ptr == NULL).

But, back to using our new variable ptr. Suppose now that we want to store in
ptr the address of our integer variable k. To do this we use the unary &
operator and write:

ptr = &k;

What the & operator does is retrieve the lvalue (address) of k, even though k
is on the right hand side of the assignment operator '=', and copies that to
the contents of our pointer ptr. Now, ptr is said to "point to" k. Bear with
us now, there is only one more operator we need to discuss.

The "dereferencing operator" is the asterisk and it is used as follows:

*ptr = 7;

will copy 7 to the address pointed to by ptr. Thus if ptr "points to"
(contains the address of) k, the above statement will set the value of k
to 7.
That is, when we use the '*' this way we are referring to the value of that
which ptr is pointing to, not the value of the pointer itself.

Similarly, we could write:

printf("%d\n", *ptr);

to print to the screen the integer value stored at the address pointed to by
ptr.

One way to see how all this stuff fits together would be to run the following
program and then review the code and the output carefully.

------------ Program 1.1 ---------------------------------

/* Program 1.1 from PTRTUT10.TXT   6/10/97 */

#include <stdio.h>

int j, k;
int *ptr;

int main(void)
{
    j = 1;
    k = 2;
    ptr = &k;

    printf("j has the value %d and is stored at %p\n", j, (void *)&j);
    printf("k has the value %d and is stored at %p\n", k, (void *)&k);
    printf("ptr has the value %p and is stored at %p\n", (void *)ptr, (void *)&ptr);
    printf("The value of the integer pointed to by ptr is %d\n", *ptr);

    return 0;
}
Note: We have yet to discuss those aspects of C which require the use of the
(void *) expression used here. For now, include it in your test code. We'll
explain the reason behind this expression later.

To review:
A variable is declared by giving it a type and a name (e.g. int k;)
A pointer variable is declared by giving it a type and a name (e.g. int *ptr;)
where the asterisk tells the compiler that the variable named ptr is a pointer
variable and the type tells the compiler what type the pointer is to point to
(integer in this case).
Once a variable is declared, we can get its address by preceding its name with
the unary & operator, as in &k.
We can "dereference" a pointer, i.e. refer to the value of that which it
points to, by using the unary '*' operator as in *ptr.
An "lvalue" of a variable is the value of its address, i.e. where it is stored
in memory. The "rvalue" of a variable is the value stored in that variable (at
that address).
References for Chapter 1:
[1] "The C Programming Language", 2nd Edition,
    B. Kernighan and D. Ritchie,
    Prentice Hall,
    ISBN 0-13-110362-8


          10 Future Technologies!        
Humanity will soon step into the future.
Technology is moving so fast that before long the whole world will change dramatically. The new technologies now under development are truly revolutionary: things scientists had scarcely imagined before, things that existed only in human fantasy.

One day, humans may live for hundreds of years without knowing disease, possess genius-level intelligence, and holiday in outer space.

These are some of the revolutionary technologies expected to change the entire world:
1. Atom-Sized Intelligent Machines: Nanotechnology
2. The Age of Superhumans: Genetic Engineering
3. The Mightiest Energy in the Universe: Nuclear Fusion
4. Wolverine's Regeneration: Stem Cells
5. Quantum Computers
6. Harry Potter's Vanishing Cloak: Metamaterials
7. The Space Elevator: a Stairway to the Stars
8. The Scramjet
9. Living for Centuries: Resveratrol
10. The Merging of Human and Machine: the Singularity

1. Atom-Sized Intelligent Machines: Nanotechnology
"Coal and diamonds, sand and computer chips, cancer and healthy tissue: throughout history, variations in the arrangement of atoms have distinguished the cheap from the cherished, the diseased from the healthy. Arranged one way, atoms make up soil, air, and water. Arranged another, they make up ripe strawberries. Arranged one way, they make up homes and fresh air; arranged another, they make up ash and smoke."
Eric Drexler, Engines of Creation

Nanotechnology is any future technology that lets humans manipulate extremely small particles, nearly the size of atoms. A nanometre is one billionth of a metre, roughly the thickness of a human hair split 50,000 ways. Such is the scale nanotechnology works at.

The goal is to create new materials for the future, even machines and robots the size of particles. These materials could be stronger than diamond, ultra-light, resistant to extremes of heat and cold, better at conducting electricity, longer lasting, environmentally friendly, and so on.

The possible applications are staggering and would change the whole world. Imagine creating materials harder than diamond and far lighter than steel (carbon nanotubes, sp2 bonds). We could build super-strong frames for cars, aircraft, buildings and bridges. Being lighter, every car and plane would also use less energy.

We could make wrinkle-free, stain-resistant clothing. We could also build robots the size of bacteria, nanobots, and send them into the human body, to cure disease, destroy cancer cells, even strengthen the body itself (Feynman, "Swallowing the Doctor"). Nanobots and nanoparticles are even expected, eventually, to be able to close the hole in the ozone layer.

With nano-scale components we could build a supercomputer the size of a matchbox, and a storage medium the size of a fingernail holding millions of gigabytes of information about humanity and the entire universe.
How is any of this possible? Because microscopes able to see atoms have existed since 1981: the Scanning Tunneling Microscope (STM), followed by the Atomic Force Microscope (AFM, 1986).
2. The Age of Superhumans: Genetic Engineering

"Human genetic engineering has the potential to change human beings' appearance, adaptability, intelligence, character, and behaviour. It may potentially be used in creating more dramatic changes in humans."
Wikipedia, Genetic Engineering

Humanity has mapped its own genes in the giant "Human Genome Project". With this data we hold an information map for exploring the function and potential of every gene in the human body: the genes that determine our physical form, the genes that cause cancer, the genes that form memory, the genes that create intelligence, even the particular genes that regulate ageing.

This could eventually allow genetic engineering to create vastly superior future humans: people in perfect health, free of disease, living beyond 100 years, with near-genius intelligence.

Imagine finding the special genes that made Einstein a genius and transferring them to all of humanity. Or David Beckham's physical gifts, or even John F. Kennedy's charisma.

But genetic engineering is not only for humans; it also applies to plants and livestock. It could create new strains of rice and wheat with many times the yield, beef that is more tender and flavoursome, even food crops and animals of superior nutritional value.

3. The Mightiest Energy in the Universe: The Power of the Stars

"What would fusion mean? Endless, cheap energy. Amazing Star Trek, space travel possibilities. Fame, fortune, and undoubtedly a Nobel or two for the lucky scientist."

The Observer, December 2000

Every second, the Sun gives off more energy than all of humanity has used in its entire history: a giant hydrogen plasma with an output of about 3.8 x 10^26 watts. This is the energy known as nuclear fusion, the power of the Sun, and the world's scientists are working to harness it.
It is the energy that keeps the giant stars of the universe burning for billions of years. The mightiest energy in the whole universe.

A huge experiment is under way in the small town of Cadarache in the far south of France, in a project named ITER. There, deuterium and tritium atoms are fused at temperatures reaching 150 million degrees Celsius, nearly ten times hotter than the core of the Sun. The fusion vessel is shielded by a tokamak's magnetic field so that it does not melt.

Remarkably, deuterium can be extracted from ordinary seawater, and tritium is bred from lithium, which can be obtained from common rock. The mightiest energy in the universe, from water and stone.

If these scientists succeed, the whole world will gain a formidable new energy source to replace oil: vast, efficient, effectively unlimited, very cheap, and environmentally friendly.

(Note: nanotechnology in solar photovoltaic cells, nanocrystals, is also said to hold enormous energy potential capable of replacing oil.)

4. Wolverine's Regeneration: Stem Cells
Imagine if heart disease and diabetes could be cured completely, the paralysed could walk, and the blind could see again.

Have you ever seen a gecko regrow its severed tail perfectly? Or, if you read the "X-Men" comics, you know the superhero Wolverine: stabbed or shot, he heals his wounds almost instantly, regenerating every cell of his body perfectly.
That is only fantasy, but there is a real creature, the planarian worm, common in seas and rivers, that can regrow nearly its entire body.

Planaria, especially the species Schmidtea mediterranea, can regenerate their whole body even when only a tiny fragment remains, as little as 1/300th of it. Remove the head, and it grows back perfectly too.

What if humans could one day do the same, directly and flawlessly replacing all of our damaged cells? Scientists have nearly reached that miracle, with a molecular-biology technology called stem cells. These are the body's most basic cells, able to turn, or be turned, into any cell or organ in the human body.

If you have heart disease, the damaged heart cells could be replaced using stem cells and your heart would work normally again. If you lose your sight, your retinal cells could be replaced with new cells grown from stem cells and you would see again.

If you suffer from a brain condition such as stroke, Alzheimer's or Parkinson's, the damaged brain cells, even tissue of the cerebral cortex, could be replaced with stem cells. And if you have diabetes, stem cells could save you by regenerating the insulin-producing cells of the pancreas.

Stem cells promise a true revolution in human health.

5. Quantum Computers

Imagine a future computer thousands of times faster than today's fastest supercomputer, thousands of times faster and more efficient than IBM's "Roadrunner" at Los Alamos, which reaches 1.7 petaflops (1 petaflop = 10^15 operations per second).

Such is the power of the quantum computer. It is so powerful because it is built on the strange phenomena of the quantum world: superposition and quantum entanglement.

In code-breaking (cryptography), for example, cracking a code of some 140 digits would take an ordinary computer billions of years; a quantum computer could crack it in a few dozen minutes.
With such machines we could also forecast Earth's weather, and other highly complex natural phenomena such as earthquakes and tornadoes, with great accuracy months in advance. And it would, of course, further accelerate the development of every advanced technology we have today.

6. Harry Potter's Vanishing Cloak: Metamaterials
"The announcement last November of an "invisibility shield," created by David R. Smith of Duke University and colleagues, inevitably set the media buzzing with talk of H. G. Wells's invisible man and Star Trek's Romulans."
MIT Technology Review

Only a few years ago, the world's leading scientists were all certain that no material on Earth could make a person invisible. It was simply impossible, violating every law of nature known to man. They were all wrong.
Metamaterials have become one of the most talked-about materials. They can make an object invisible. A garment built with this technology could make its wearer "disappear", like the magic cloak in "Harry Potter".

A fighter aircraft clad in metamaterial would be invisible: not merely invisible to radar, like "stealth" technology, but truly invisible to the eye, like the cloaking device in Star Trek.

This can be done, for instance, by creating artificial materials that bend electromagnetic radiation, and with it light, which is itself electromagnetic radiation. The ingredients can be as ordinary as tin and plastic, arranged in particular structured patterns.

A metamaterial bends light around the object it covers and lets the light reconverge on the far side, the way a river flows around a rock. In recent research at Purdue University, special needles were used to bend light past the cloaked object, so that whatever lies behind it remains visible.

The material is being studied around the world, including at MIT, the University of California, Berkeley, Duke University, and Caltech.

7. The Space Elevator: a Stairway to the Stars
A space elevator is essentially an extremely tall lift running from the Earth's surface up to Earth orbit, some 35,000 kilometres high. With such a lift, travel to orbit would become far easier, and far cheaper.
Many hope that the space programmes which stalled at the Moon because of cost could then begin again, and that the human dream of travelling to Mars might come true.

The elevator began as pure fantasy, but the discovery of a new technology has made it genuinely feasible: the carbon nanotube, a new material said to be stronger than diamond and lighter than steel.

This could eventually open a whole new era in space exploration.

8. Entering the Hypersonic Era: the Scramjet
The scramjet could be one of the greatest revolutions in the history of world transport. Today's most advanced fighter, the American F/A-22 Raptor, has a top speed of Mach 2, twice the speed of sound. A scramjet passenger aircraft would fly at ten times the speed of sound, Mach 10.
A flight from New York to Tokyo, today a long and exhausting 18 hours, would take a scramjet just 120 minutes.

A scramjet needs none of the usual expensive, heavy rocket propellant; it burns liquid hydrogen mixed with oxygen drawn directly from the atmosphere (an air-breathing scramjet engine). Burning hydrogen and oxygen at supersonic speed is what accelerates it.

This would make flying from anywhere to anywhere on Earth extremely fast.

9. The Fountain of Youth: Resveratrol
Perhaps we will one day find something that lets us live for hundreds of years. Scientists may already have found candidates: the "sirtuins", Silent Information Regulator 2 (Sir2) proteins, and resveratrol, an antioxidant that turns out to be abundant in red grapes (so eat your grapes).

But scientists have also created something even more potent than resveratrol: a drug code-named SRT1720.

"SRT1720 is a thousand times more potent than resveratrol, meaning that it could be taken in smaller doses. A person would have to drink hundreds of glasses of wine to get the same health benefits from resveratrol. Resveratrol will pretty soon look like ancient technology."
David Sinclair, a biologist at Harvard Medical School

10. The Singularity

One day there will come a time when, through genetic engineering, every human has a near-perfect body and mind.
Then, with advances in computing, quantum computers and nanotechnology could allow humans to implant particle-sized quantum computers in their brains and use nanoparticles to strengthen their bodies further. This is what is called the Singularity: the merging of human biology with technology.


[synopsis of a lecture delivered for the Ong Siew May Distinguished Lecture Series, 12th September 2008, National University of Singapore]

Building typology has always been of interest as a subject of architectural study, because it begs the question of whether works of architecture need classification and grouping to serve their purpose, or whether types are necessary in order to validate their functions. So we ask: what is type? Are private houses a building type, and do typifying shapes and built forms add any value to buildings? Indeed none whatsoever. Conforming a work to any one building type does not make it different or better in any way, let alone give it any more meaning or validate its functions.

For a long time we have had visual representations for many familiar buildings: hospitals, schools, restaurants and so on, each with its own emphasised visual interpretation. Now it is more difficult to differentiate between two buildings simply by type. The visual representation of architectural works has evolved greatly; houses come in myriad shapes and forms, and they defy classification. We can no longer rely on typological sets to differentiate one work from another. Perhaps the definition or classification of buildings has become wholly defunct and unnecessary. We conclude at this juncture that typology is no longer a means of defining a building. How then do we proceed to define a building, if at all? And why do we choose to give buildings definitions in the first place? Often one asks: what is it for? How does all this relate to their sustainability?

By examining typological sets, or typology, we gain a deeper insight, if not appreciation, of how buildings are perceived, and of how their rapidly changing forms, articulation and evolving character have given us the means to comprehend them. Our own evolving needs have given buildings their new typological strain, and with this we propose they have become more sustainable today, as a type, than they have ever been before. How so? We propose that it does not matter if what a building is meant to do has nothing to do with how it looks. There is little in the shape or fenestration of an iconic tower to suggest whether it is a commercial building or a residential one. It has become increasingly difficult to predict the look and shape of the new museum. Hospitals are so advanced today that their designs warrant, in many instances, the same planning principles as a four-star restaurant or a boutique hotel. Therein lies the question of typology, and how its relevance has become very questionable in defining the ideal building, the sustainable built form.

There are no apparent parallels between the Farnsworth House and a Palladian villa, none whatsoever, to say they belong to the same set. Yet both may well be exemplary models of sustainable architecture precisely because each has its own infinite 'sustain'-ability to deal with change, from its original residential use to new commercial renovations, without the need for major alteration of its parts, let alone the envelope. Yet another way to cut back energy, waste and pollution of our natural environment.

Fortunately we have come a long way toward accepting these models as infinitely superior works of architecture precisely because they bear no resemblance to each other, and therefore attest to the possibility that there is perhaps no such thing as an ideal house type. Surely at this point we see that the correlation between a building's function and its representation has been completely severed in recent times? These thoughts, derived from an appreciation and study of building typology, and of the classification of buildings necessary before we can begin to appreciate them or better understand their qualities, are examined and put forth here to develop our idea: that a sustainable work is necessarily one that needs no specific definition of its type nor of its generic functions. It should merely be derived from one's awareness of its durability in long-term use, and its inherent ability to change, adapt, redefine, and replace or restore itself to suit the current need.

Flexibility, on the other hand, and the open plan are concepts from late-eighties office design; IBM Cosham, for one, and the new B1-type offices in the UK brought about successful applications and the introduction of new and refreshing planning guidelines. The workplace has evolved into a new typology of comfort and productivity rolled into one. No longer is the office environment stale and devoid of the familiar pleasures of the home, like the kitchen [pantry] and the living room [the reception and lounge]. Break-out areas, yet another concept, have been implemented in many an organisation's interior plan. These concepts have slowly eroded the meaning of offices and their outdated typological sets. The modern workplace and the home spa are the new typological sets, and they are inevitably more sustainable than their forebears. They have in many ways evolved into hybrid buildings. Herein lies the new research and proposition: we can examine why this hybrid typology has become infinitely more durable than its earlier examples. Indeed, if we remove the need to typify or classify buildings, we almost immediately make them less specific, and therefore more responsive and future-proof, hence more sustainable than if we confine them to a very specific brief.

Often a house is demolished after a new owner moves in. This is the problem we want to address. The lifespan of poorly designed buildings, like many hotels and offices built in the 70s, has often ended in irresponsible demolition and extensive refurbishment. This is a very energy-intensive activity, wholly irresponsible; it has contributed greatly to the destruction of built environments and done much to destabilise neighbourhoods and demography. Not only does such activity pollute the natural environment, it also requires new energy to be poured into what was already displaced by the first intervention. Putting back what we take from the environment is what we think all buildings must do: a built work must return to its development more than was taken from it, whether in trees or in the use of natural materials. Herein lies the next set of investigations.


          Home Network. For Dum Dums.         
Home networking. Most of us depend on our cable or phone company to send us a little gizmo in the mail that we feel good about setting a password on. But building a home network can be more satisfying than relying on Wi-Fi alone from a second-rate modem/router/wireless-access-point combo you WAY overpay for on a monthly basis. With just a little knowledge, you can double your speed and enhance your entertainment and web-surfing experience. All this without a tech degree from an IBM school.

Here are the basics.

First, you need a modem. It converts either your cable service or your phone service into internet and brings it into your home. That's the demarcation point: the point where it's on you and not your internet service provider (ISP). But don't be afraid, you can do better than Comcast! In early 2014, for the home owner, I would call this the best modem on the market:
If you buy a modem like this you will, sadly, need to call your ISP to get it activated. Once that happens, you can hook up a router. You may need to change the router's default IP, since it can sometimes be the same as the modem's. It's easy though, and usually found in the same area where you set up your passwords: just change the third set of numbers so the two devices aren't on the same address. No problem. If you want to understand what an IP is, look to YouTube. I recommend my favorite tech how-to channel later on.

Many modems have a built-in router. A router directs traffic between your network's devices and the internet. I recommend buying one that is independent of your modem. Routers come in many "flavors." If you run Apple gear, they make some pretty nice routers, some even with built-in backup capacity or the ability to hook up external hard drives. A quick Amazon search will give you some good suggestions.

Next, a switch. Wait, aren't there switch ports on the back of my router, and why would I look into anything but wireless? Well, yes, your router likely has a switch built into it (the Ethernet ports on the back), but you can EXPAND your capacity with additional switches fed from those ports. Why would you do that? Because you can get up to 1 gigabit of connection speed, per port, which you cannot get with wireless. True, some wireless is nominally faster than wired, but not once multiple devices connect to it, and certainly not within a home owner's budget. Not to mention that wired connections are more dependable than wireless. If you want to know how to run cable and hook up wall jacks throughout your home, I suggest Eli the Computer Guy on YouTube for some wonderful tutorials on the subject. I put together a home network with wired runs to each room in my house and now enjoy tremendous streaming speeds between devices like my Mac and Apple TV. Watching a movie is now even simpler than popping in a DVD!
          IBM Domino 9 PDF Export Tool 1.0        
IBM Domino 9 PDF Export Tool exports Lotus Domino DXL databases.
          IBM releases Watson Machine Learning for a general audience        

Not content with beating humans at quiz shows, IBM is moving forward with its Watson Machine Learning service. Now generally available after a year’s worth of beta testing, WML promises to address the needs of both data scientists and devs.

The post IBM releases Watson Machine Learning for a general audience appeared first on JAXenter.

          Machinima - Standby        
As a result of being nominated for two AVI Choice Awards, for Favorite Artist and Favorite Machinima Artist, I will be posting a machinima every few days.

 Standby is the culmination of three large works into one story.  Back in 2009 or so I was commissioned to create something for a region called Black Swan, created by Starax/Light Waves for Rezzable.  It was an open request to make something, so I spent some time there trying to absorb what the sim "felt" like and what might fit within it rather than clash.   The region had two giant figures embracing in the centre, with a ring on the outside holding various sculptures.  It felt to me as though it almost had a story to it, though in fact it didn't.  What I decided to do was create a tower in the center that told a story as you climbed it.  It was full of traps and quite difficult, but that was part of an idea I had: if you make something challenging to the viewer, it will be more rewarding in the end than simply giving the work to the viewer to stand back and observe from a distance.   I wanted them to be active participants in the artwork rather than passive observers.  Black Swan was a beloved sim to many at that time in SL, and I got a few nasty messages and even some demands from upset people to remove it.  I recall someone writing in a review at the time that they didn't want to "do work" to see art.  I remember reading that and deciding I would create for the virtual space, for the explorer types, and not the snowflakes who brought their elitist art baggage from the real world into the virtual space, hoping the artist would bring them the art on a silver platter to observe at their leisure.  There are those who role-play art and feel that real art must look like what we leave behind in real life: the paintings or static sculptures you can find in real life.  Then there are those who understand the unique traits of the virtual space and realize the art created here can defy all the aesthetic and conceptual rules that exist in the "real" world.  In fact this spurred a successful group called NPIRL, or Not Possible in Real Life.
     At this time I had the idea to create a narrative that would be told in installments over many years.  In fact the new work "Hand", which should open in a few weeks, is a continuation of this original story, as has been every work with the exception of Virginia Alone.  Each story resides in the same world and acts as a layer of an overall narrative.  The work for Black Swan was called the Daughter of Gears, and after the exhibition IBM requested to have it shown on their regions.  The following two installments, the Rabbicorn story and then Standby, were at IBM.  I used this work to apply for a government grant from the Ontario Arts Council, and it was the first of three successful art grants awarded to me thus far.  Standby is the title of the three stories combined into one machinima, and it was my first extended machinima, running around 30 minutes long.

          IBM Watson Helps Grammy-Winning Producer Craft An EP        
The computer system's data technology generated musical scores for Alex Da Kid's first solo project
          Politieacademie makes a large body of knowledge available via Office SharePoint Server        
Hein van der Schoot, manager of Knowledge Refinement at the Research, Knowledge & Development directorate: "We are continuously working to connect science and policing studies with practice. Questions arising from practice often drive research. Over the years this has produced a knowledge base of some 15,000 knowledge pages: in effect, a database full of practical and policing knowledge, with the experience of police officers, legal information, knowledge-related applications and other information. The value of the database is great, all the more because the police use the stored knowledge in their daily work." The old knowledge base, however, no longer sufficed. Hein van der Schoot: "A rapidly changing society places ever higher demands on knowledge development within the police organization. The old system fell short in knowledge management and knowledge sharing, while within the Politieacademie there was a growing need for a portal environment for collaboration and for ways to work with both existing and new knowledge." "We first investigated whether the market offered a content management system that supports our knowledge model. It quickly became clear that a CMS is too limited and that a rich platform had to be chosen, one that supports not just the knowledge model but the entire chain of knowledge creation. We looked at the platforms from Oracle, IBM and Microsoft. The thinking behind Microsoft Office SharePoint Server 2007 fit perfectly. Many of the functionalities we required come standard 'out of the box', while you can easily add all kinds of functionality yourself." To make the large body of knowledge quickly and easily accessible to all employees, the rather rigid knowledge database has since been rebuilt with Microsoft SharePoint Server into a flexible portal environment, the PolitieKennisNet. 
Result: faster processes for collecting, recording and sharing police knowledge, interactive capabilities, and better links with other systems.
          Consultant - Identity & Access Management - Simeio Solutions - Atlanta, GA        
Privileged Access Management (CyberArk, Lieberman Software, BeyondTrust, CA Exceedium, IBM SPIM, Dell TPAM, etc). Consultant - Identity & Access Management....
From Simeio Solutions - Wed, 09 Aug 2017 06:06:17 GMT - View all Atlanta, GA jobs
          RAISE Act may result in cost rise for IT firms - Business Standard: News Now        
With US President Donald Trump backing the Reforming American Immigration for a Strong Economy (RAISE) Act, a reform of green card applications, IT services firms are likely to see costs rise, as they may be pushed to hire high-skilled professionals locally. If implemented, the RAISE Act could also increase the cost of employing an Indian software engineer in the US for both Indian firms and their global counterparts such as Accenture, IBM and others. The Act, which aims to change the existing lottery system for issuing green cards (permanent residency) into a points-based one, proposes that priority be given to English-speaking ability, a highly paid job, a doctorate from a US university, and other categories. For example, as reported by BBC News, an individual can get 13 points (out of a total of 30) for a US doctorate, while a US or foreign high-school diploma gets only one point. "It (RAISE Act), if implemented, will further force IT services .
          Max MQTT connections

I have a need to create a server farm that can handle 5+ million connections and 5+ million topics (one per client), and process 300k messages/sec.

I tried to see what various message brokers were capable of, so I am currently using two RHEL EC2 instances (r3.4xlarge) to make lots of resources available. So you do not need to look it up: each has 16 vCPUs and 122GB RAM. I am nowhere near that limit in usage.

I am unable to get past the 600k connection limit. Since there doesn't seem to be any O/S limitation (plenty of RAM/CPU/etc.) on either the client or the server, what is limiting me?

I have edited /etc/security/limits.conf as follows:

* soft  nofile  20000000
* hard  nofile  20000000
* soft  nproc   20000000
* hard  nproc   20000000
root soft  nofile 20000000
root hard  nofile 20000000

I have edited /etc/sysctl.conf as follows:

net.ipv4.ip_local_port_range = 1024 65535
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_mem = 5242880 5242880 5242880
net.ipv4.tcp_tw_recycle = 1
fs.file-max = 20000000
fs.nr_open = 20000000
net.ipv4.tcp_syncookies = 0
net.ipv4.tcp_max_syn_backlog = 10000
net.ipv4.tcp_synack_retries = 3
net.core.somaxconn = 65536
net.core.netdev_max_backlog = 100000
net.core.optmem_max = 20480000

For Apollo: export APOLLO_ULIMIT=20000000

For ActiveMQ:

ACTIVEMQ_OPTS="$ACTIVEMQ_OPTS -Dorg.apache.activemq.UseDedicatedTaskRunner=false"
ACTIVEMQ_OPTS_MEMORY="-Xms50G -Xmx115G"

I created 20 additional private addresses for eth0 on the client, then assigned them: ip addr add dev eth0

I am FULLY aware of the 65k port limits which is why I did the above.

  • For ActiveMQ I got to: 574309
  • For Apollo I got to: 592891
  • For Rabbit I got to 90k, but logging was awful and I couldn't figure out how to go higher, although I know it's possible.
  • For Hive I got to the trial limit of 1000. Awaiting a license.
  • IBM wants to trade the cost of my house to use them - nah!
asked Mar 30 '15 at 23:52
Can't really tell how to increase the throughput. However, check out . Not sure about the MQTT support, but it seems capable of extreme throughput / numbers of clients. – Petter Nordlander Mar 31 '15 at 7:52
did you try mosquitto? – Aleksey Izmailov Apr 2 '15 at 8:02
Trying Hive, Apollo, Mosquitto, ActiveMQ, Rabbit – redboy Apr 2 '15 at 21:58

ANSWER: While doing this I realized that I had a misspelling in my client setting within the /etc/sysctl.conf file for: net.ipv4.ip_local_port_range

I am now able to connect 956,591 MQTT clients to my Apollo server in 188sec.

More info: trying to isolate whether this is an O/S connection limitation or a broker limitation, I decided to write a simple client/server.

The server:

    Socket client = null;
    server = new ServerSocket(1884);
    while (true) {
        client = server.accept();
        clients.add(client);
    }

The Client:

    while (true) {
        InetAddress clientIPToBindTo = getNextClientVIP();
        Socket client = new Socket(hostname, 1884, clientIPToBindTo, 0);
        clients.add(client);
    }

With 21 IPs, I would expect (65535 - 1024) * 21 = 1354731 to be the boundary. In reality I am able to achieve 1231734:

[root@ip ec2-user]# cat /proc/net/sockstat
sockets: used 1231734
TCP: inuse 5 orphan 0 tw 0 alloc 1231307 mem 2
UDP: inuse 4 mem 1
UDPLITE: inuse 0
RAW: inuse 0
FRAG: inuse 0 memory 0

So the socket/kernel/io stuff is worked out.
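The port arithmetic used above can be sanity-checked in a few lines; a small sketch, assuming the ephemeral port range 1024-65535 from the sysctl settings and the 21 source IPs (eth0 plus 20 aliases) described earlier — class and variable names are illustrative:

```java
public class PortBudget {
    public static void main(String[] args) {
        int portLo = 1024, portHi = 65535; // net.ipv4.ip_local_port_range
        int sourceIps = 21;                // eth0 plus 20 alias addresses
        // Each (source IP, source port) pair can carry one outbound TCP
        // connection to a given destination, so the theoretical ceiling is:
        int maxConns = (portHi - portLo) * sourceIps;
        System.out.println(maxConns); // 1354731
    }
}
```

The measured 1231734 falls about 9% short of this ceiling, which is consistent with some ports in the range already being in use.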

I am STILL unable to achieve this using any broker.

Again, just after my client/server test, these are the kernel settings.


[root@ip ec2-user]# sysctl -p
net.ipv4.ip_local_port_range = 1024     65535
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_mem = 5242880      5242880 15242880
net.ipv4.tcp_tw_recycle = 1
fs.file-max = 20000000
fs.nr_open = 20000000

[root@ip ec2-user]# cat /etc/security/limits.conf
* soft  nofile  2000000
* hard  nofile  2000000
root  soft  nofile 2000000
root  hard  nofile 2000000


[root@ ec2-user]# sysctl -p
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_mem = 5242880      5242880 5242880
net.ipv4.tcp_tw_recycle = 1
fs.file-max = 20000000
fs.nr_open = 20000000
net.ipv4.tcp_syncookies = 0
net.ipv4.tcp_max_syn_backlog = 1000000
net.ipv4.tcp_synack_retries = 3
net.core.somaxconn = 65535
net.core.netdev_max_backlog = 1000000
net.core.optmem_max = 20480000

SIMONE commented on 2016-06-01 16:15

          Unsupervised Learning: No. 71        

This week’s topics: Half of Android devices haven’t been patched in over a year, Tavisclosure, NEST camera flaws, senate vs. privacy, electronics ban, bad Let’s Encrypt certs, Moodle SQLi, infosec venture capital drying up, IBM employees heading into the office, Twitter going paid model, Google killing Talk, Quiet spaces, Age of the influencer, AI vs. jobs, tools, aphorisms,...


I do a weekly show called Unsupervised Learning, where I curate the most interesting stories in infosec, technology, and humans, and talk about why they matter. You can subscribe here.

          Column: Why you no longer need a venture capitalist to start a successful business        

If I can use a credit card to start a business that will quickly grow to be dominant, why do I need a venture capitalist? asks Jerry Davis, author of the new book, “The Vanishing American Corporation.” Photo by Robert Galbraith/Reuters

Editor’s Note: This is the fourth in a series of excerpts we are publishing from sociologist Jerry Davis’s new book, “The Vanishing American Corporation: Navigating the Hazards of a New Economy.” For more on the topic, watch last week’s Making Sen$e report below.

— Kristen Doerer, Making Sen$e Editor

Nike demonstrated that the value of sneakers is in the design and the brand, not in the actual physical production or distribution of the shoes. Design and execution can be entirely separated, and consumers do not seem to be bothered by it. The value is in the intellectual property; goods themselves are fungible. This model spread far beyond the garment industry to include computers and electronics, pharmaceuticals, pet food and almost anything else you can buy in the U.S. Nearly everyone is aware that the iPhone, the leading product of our age, is assembled by employees of Foxconn, not by Apple. Aside from occasional concerns about human rights abuses, however, consumers and investors are untroubled by this.

If I can use a credit card to start a business that will quickly grow to be dominant, why do I need a venture capitalist?

Dell demonstrated that even the design is not always especially important if the price is low enough. Why pay extra for an IBM label when a Dell is just as good, customizable and a lot less expensive? Vizio took the Dell idea a step further. The designs are thoroughly generic, and there is no customization. But they are much, much less expensive than the name brands like Sony. Unlike Sony, Vizio has none of the baggage (and costs) of being a social institution. And when flat-screen TVs are replaced by implantable 3D virtual reality brainpods, Vizio will disappear with a minimum of fuss and tears, to be replaced by a new generic implantable brainpod vendor.

There is something of an irony in the fact that the shareholder-driven outsourcings of the 1990s and 2000s created the infrastructure of generic manufacturing, distribution, business services and computing power to render the shareholder-owned corporation obsolete. The restructurings of the 1990s were almost inevitably accompanied by a nod to shareholder value, as at the food company Sara Lee. The spread of the virtual corporation model was a boon for generic plug-and-play vendors who could assemble products and manage supply chains, ship goods to consumers and provide various business services. But once all of these components were available off-the-shelf — not just the physical components, but all the processes needed to do business — it became much easier for anyone to be the next Michael Dell. The economies of scale that made corporations indispensable in the 20th century had now shifted against them.

READ MORE: Column: How lightweight enterprises are outperforming industry heavyweights

Surprisingly enough, this often came at the expense of the investor class who had helped make it happen in the first place. If I can use a credit card to start a business that will quickly grow to be dominant, why do I need a venture capitalist? A 2013 article in The New Yorker described how the cost of starting up a new venture had collapsed due to the ready availability of plug-in resources.

Once, an entrepreneur would go to a venture capitalist for an initial five-million-dollar funding round: money that was necessary for hardware costs, software costs, marketing, distribution, customer service, sales, and so on. Now there are online alternatives. ‘In 2005, the whole thing exploded,’ [an informant] told me. ‘Hardware? No, now you just put it on Amazon or Rackspace. Software? It’s all open-source. Distribution? It’s the App Store, it’s Facebook. Customer service? It’s Twitter: just respond to your best customers on Twitter and Get Satisfaction. Sales and marketing? It’s Google AdWords, AdSense. So the cost to build and launch a product went from five million…to one million…to five hundred thousand…and it’s now to fifty thousand.’

It is not hard to predict that this cost structure will continue to decline, and it is not just for app startups. Capital equipment has also dropped dramatically in cost, due in large part to CNC (computer numerical control) technology, which acts as the brains of machine tools. A Shopbot router, which could cut plans for much of the furniture in the Ikea catalogue, costs far less than a year of tuition at a private college, and a portable version costs not much more than a laptop. Indeed, outfitting a machine shop can cost far less than sending a kid to college these days. But there is no need to actually purchase or rent the equipment because membership in Techshop or other similar makerspaces allows makers to use high-end precision equipment for the cost of a gym membership. With easy access to open-source designs, anyone who can assemble Ikea furniture can make it themselves, using their own materials.

READ MORE: Column: When corporations were a source of greater equality

The post Column: Why you no longer need a venture capitalist to start a successful business appeared first on PBS NewsHour.

          Delivery Associate Partner - IBM - Canada        
We are seeking an Associate Partner who will contribute significantly to the aggressive growth objectives of the team. IBM Global Business Services:....
From IBM - Mon, 17 Jul 2017 21:06:15 GMT - View all Canada jobs
          Associate Partner – Public Sector (Ottawa or Toronto location) - IBM - Canada        
We are seeking an Associate Partner who will contribute significantly to the aggressive growth objectives of the team. IBM Global Business Services:....
From IBM - Thu, 06 Jul 2017 15:23:55 GMT - View all Canada jobs
          Triggering and analyzing a Java heap dump        

To analyze a memory leak in a Java application, a thread dump often will not solve the problem. With the jstat tool [e.g.: jstat -gcutil pid 1000 5] you can watch a running application's heap size, perm size, survivor ratio and so on, but you still cannot tell which objects have filled the heap.

     What is a Java heap dump

      First, understand what the Java heap is: it is the runtime data area from which memory for class instances and arrays is allocated, and all Java threads share the data in the heap while running. A Java heap dump is effectively a snapshot of the running application's heap at a particular point in time.


     Triggering a Java heap dump

There are several ways to trigger a heap dump:

  1. Use $JAVA_HOME/bin/jmap -dump, e.g.: jmap -dump:format=b,file=/home/longhao/heapdump.out <pid>
  2. Use the MBean view in $JAVA_HOME/bin/jconsole: go to MBeans >> HotSpotDiagnostic >> Operations >> dumpHeap and click the dumpHeap button. The generated dump file is placed in the Java application's working directory.
  3. Start the application with the option -XX:+HeapDumpOnOutOfMemoryError, which generates a dump file when the application throws OutOfMemoryError.
  4. Use hprof: start the JVM with -Xrunhprof:heap=sites, which produces a java.hprof.txt file. This option makes the JVM run extremely slowly and is not suitable for production.
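Beyond the methods above, the same HotSpotDiagnostic MBean that jconsole drives can also be invoked from inside the application itself; a minimal sketch (the output file name heapdump.hprof is an arbitrary choice here, and note that dumpHeap fails if the file already exists):

```java
import java.lang.management.ManagementFactory;
import com.sun.management.HotSpotDiagnosticMXBean;

public class HeapDumper {
    public static void main(String[] args) throws Exception {
        // Obtain the platform HotSpotDiagnostic MBean (the same one
        // exposed under MBeans >> com.sun.management in jconsole).
        HotSpotDiagnosticMXBean mx = ManagementFactory.getPlatformMXBean(
                HotSpotDiagnosticMXBean.class);
        // live = true dumps only reachable objects (forces a GC first),
        // mirroring jmap -dump:live,format=b,file=...
        mx.dumpHeap("heapdump.hprof", true);
        System.out.println("wrote heapdump.hprof");
    }
}
```

This is handy for dumping the heap at an exact moment in application logic, e.g. right after a suspect cache is populated.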

     Analyzing a Java heap dump

     1: Using IBM HeapAnalyzer

    IBM HeapAnalyzer is a free graphical analysis tool for the JVM heap; it can effectively list the heap's memory usage and help analyze the causes of Java memory leaks.

    After downloading and unpacking it you have a ha413.jar; run: java -Xmx512m -jar ha413.jar /home/longhao/heapdump.out




     2: jhat

    jhat (the Java Heap Analysis Tool) is a command for analyzing the Java heap. It presents the objects in the heap as HTML, including object counts, sizes and so on, and supports the Object Query Language (OQL). Once it has finished analyzing the application, you can browse the results at http://localhost:7000.

    Example: $JAVA_HOME/bin/jhat -J-Xmx512m /home/longhao/dump.out

     3: Eclipse Memory Analyzer

    Eclipse Memory Analyzer is a fast and powerful Java heap analyzer that helps you find memory leaks and reduce memory consumption. Under File > Acquire Heap Dump > configure > HPROF jmap dump provider, set the JDK used to analyze the application, then click the target application in the list to generate a heap dump and analyze it.



草儿 commented on 2011-10-04 22:31

          Michelle Killebrew’s TEDx Talk: How Social Technology Can Make Us More Human        
Michelle Killebrew works for IBM Social Business, where her team focuses on messaging and solutions that define social business and demonstrate how organizations can embrace this next information revolution in the workforce. In this TEDxUniversityofNevada 2015 talk, she discusses how technology, especially social technologies like Twitter and Facebook, can make us more human. Through our power […]
Originally Published 2004-03-16 06:41:08

Just arrived in Warwick, UK, at the IBM Data Centre for the mPharma project (company unnamed). Working with TP.

This place is totally locked down. We're terminal-serviced to one box, then TS'ing from there to the actual mbiz and mpharma servers. Unbelievable. The actual real-world connections from device to Siebel/Oracle are unreal... we're talking about 6 hops, a VPN connection, and a firewall between the two.

Having access rights issues with c:\winnt\microsoft.NET\ from the mpharma application. Tried adding localhost\everyone and localhost\aspusr with full control rights; no dice.

Will keep playing...
          IBM expands its SOA offering        
Framed within its on-demand strategy, and with the aim, according to Juan Castillo, WebSphere sales director at IBM for Spain, Portugal, Greece, Turkey and Israel, "of providing companies with the ...
          IBM bets big on Lotus Notes/Domino with version 7.0        
With the release of version 7.0, IBM's Lotus Notes/Domino messaging and collaboration platform aims to become one of the company's growth bets ...
          Etihad Airways' digital transformation powered by IBM        

IBM, the American company specializing in hardware, software and IT services, will supply, for the next ten years, the equipment needed for the digital transformation of Etihad Airways. Etihad Airways wants to develop its infrastructure and security services through cloud-hosted platforms, improve the experience […]

The post Transformation numérique d’ Etihad Airways grâce à IBM appeared first on TechOfAfrica.

          Comment by TechNews 科技早報 – 20170310 | TechNews 科技新報 on "IBM develops atomic-level storage technology to build a 'nano hard drive'"        
[…] IBM develops atomic-level storage technology to build a "nano hard drive". Data-storage technology advances by the day, and it seems only a matter of time before a drive of the same size holds ever more data; people have come to take it for granted. But IBM recently announced a development whose results are quite astonishing. Researchers turned atoms into the world's… […]
          FU AC – new insights        

That one has been switched off. A case of hardware thinking along with the software. Out it goes. People here had already asked me to post more FUACs, but there aren't many yet, save one in which my reply 'Tuurlijk' ('Sure') was turned by the iPhone into 'Tuur IBM test', and a standard 'Thx' into 'Ghz'.

The end result was when I . . . → Read more: FU AC – nieuwe inzichten

BIOS, short for Basic Input Output System, refers, in IBM PC and compatible computer systems (computers based on the Intel x86 processor family), to a collection of software routines that can do the following:
  1. Initialize and test the hardware (in a process called the Power On Self Test, POST)
  2. Load and run the operating system
  3. Manage some of the computer's basic configuration (date, time, storage configuration, boot configuration, performance, and system stability)
  4. Help the operating system and applications manage the hardware through the BIOS Runtime Services.
The BIOS provides a low-level communication interface and can drive many kinds of hardware (such as the keyboard). Because it sits so close to the hardware, the BIOS is usually written in the assembly language of the machine in question.
The term BIOS first appeared in the CP/M operating system, as the part of CP/M loaded at boot time that dealt directly with the hardware (some machines running CP/M had a simple boot loader in ROM). Most versions of DOS have a file called "IBMBIO.COM" (IBM PC-DOS) or "IO.SYS" (MS-DOS) that plays the same role as the CP/M disk BIOS.
The word BIOS can also be read as "life" in Greek (Βίος).

A BIOS contains several basic components, as follows:
Example of a CMOS Setup screen (Phoenix BIOS)
  • The BIOS Setup program, which lets the user change the computer's configuration (hard disk type, disk drives, power management, computer performance, and so on) as desired. The BIOS hides the details of accessing the hardware, which would be quite complicated to do directly.
  • Drivers for basic hardware, such as the video adapter, input devices, the processor, and a few other devices, for basic 16-bit operating systems (in this case, the DOS family).
  • The main bootstrap program, which lets the computer boot into the installed operating system.


The BIOS is also often called the ROM BIOS because it was originally stored in a read-only memory (ROM) chip on the motherboard. It is kept in ROM so that it can execute the moment the computer is switched on, without waiting for a (slow) storage device to start up first. In modern PCs the BIOS is stored in an electrically rewritable ROM chip, or Flash ROM, which is why the name Flash BIOS is now more common than ROM BIOS. The following ROM chips have been used to store the BIOS.
ROM type | How it is written | Erasable? | BIOS type
Mask ROM | Photolithography | No | ROM BIOS
Programmable ROM (PROM) | PROM writer | No | ROM BIOS
Erasable PROM (EPROM) | EPROM/PROM writer | Yes, using an EPROM rewriter or by shining ultraviolet light through the clear quartz window. | ROM BIOS
Electrically Erasable PROM (EEPROM) | EEPROM/EPROM/PROM writer | Yes, using an EEPROM rewriter, or electrically in-circuit using EEPROM programmer software. | ROM BIOS
Flash ROM | EEPROM writer, or software that can write to Flash ROM | Yes, using an EEPROM writer, or electrically in-circuit using Flash BIOS programmer software. | Flash BIOS
Message shown by the BIOS when the NVRAM is corrupted, or when the CR-2032 lithium battery is drained or removed from its slot
Although the BIOS itself is stored in read-only memory, the BIOS configuration is not stored in ROM (because ROM is static) but in a separate chip known as the Real-Time Clock (RTC), which contains Non-Volatile Random Access Memory (NVRAM). NVRAM is also often called CMOS RAM (Complementary Metal-Oxide Semiconductor RAM) because it is made with the CMOS process, which lets it run on very little power. Despite the name, NVRAM is actually a volatile chip: the data stored in it is easily lost if its power supply is cut. It is therefore kept alive by a small battery (similar to a calculator or watch battery), a CR-2032 lithium cell, which can power the NVRAM for three to five years. If the battery runs out, or its power is interrupted (for example, because it is removed from its slot), all settings revert to the defaults programmed by the manufacturer, and the BIOS typically reports a CMOS Checksum Error or NVRAM Checksum Error.

Updating the BIOS

The BIOS is also sometimes called firmware, because it is software stored in read-only media. That was literally true before 1995, when the BIOS was always stored in media that could not be modified. As computer systems grew more complex, the BIOS moved to EEPROM or flash memory that the user can rewrite, so it can be upgraded (to support a newly released processor, to fix bugs that hurt performance, or for other reasons). However, a BIOS update that goes wrong (executed incorrectly, or interrupted while the upgrade is in progress) can leave the motherboard dead: the computer can no longer be used, because the component responsible for booting (the BIOS) is gone or corrupted.
To guard against such corruption of the BIOS, some motherboards carry a backup BIOS. In addition, most BIOSes reserve a region of the EEPROM/flash memory that cannot be upgraded, called the boot block. The boot block is always executed first when the computer is switched on. It verifies that the rest of the BIOS code is still intact (using error-detection methods such as a checksum, CRC, or hash) before executing it. If the boot block detects that the BIOS is corrupted, it asks the user to reprogram the BIOS using a floppy disk that contains a flash-memory programmer and an image of the same or a newer BIOS. Motherboard makers frequently release BIOS updates to add capabilities to their products or to remove annoying bugs.
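The boot-block scheme can be sketched in a few lines. Here CRC-32 stands in for whatever error-detection method a given vendor actually uses, and the function and message names are invented for illustration.

```python
import zlib

# Sketch of the boot-block idea: a small write-protected stub verifies the
# main BIOS image before jumping to it, and enters recovery mode on mismatch.
def boot(image: bytes, expected_crc: int) -> str:
    if zlib.crc32(image) != expected_crc:
        return "recovery: please reflash BIOS from floppy"
    return "executing main BIOS"

image = b"main BIOS code"
good_crc = zlib.crc32(image)          # recorded when the image was flashed

print(boot(image, good_crc))          # executing main BIOS
print(boot(b"corrupted!!", good_crc)) # recovery: please reflash BIOS from floppy
```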

          ‘Perseverance matters in an entrepreneurial journey’, says CEO of Sanovi Technologies        
  Interview with Chandra Sekhar Pulamarasetti It is a tale of a company that has grown next door to India’s major software exporters. Much to their surprise, IBM acquired Sanovi Technologies, a cloud migration software entity. Chandra Sekhar Pulamarasetti, its founder and chief executive, talks to Ayan Pramanik.   Edited Excerpts:   Tell us your […]
          IBM Closes Acquisition of Sanovi Technologies        
Acquisition of hybrid cloud recovery and business continuity software firm bolsters IBM’s Software Defined Resiliency strategy   ARMONK, NY and BANGALORE, INDIA – 15 November 2016: IBM (NYSE: IBM) today announced it has completed the acquisition of Sanovi Technologies, a privately held company that provides hybrid cloud recovery, cloud migration and business continuity software for […]
          IBM to Acquire Sanovi Technologies to Expand Disaster Recovery Services for Hybrid Cloud        
Move will enhance IBM resiliency capabilities with the help of advanced analytics to meet complexities of hybrid environments ARMONK, NY and BANGALORE, INDIA – 27 Oct 2016: IBM (NYSE: IBM) today announced it has signed a definitive agreement to acquire Sanovi Technologies, a privately held company that provides hybrid cloud recovery, cloud migration and business […]
          Analyst says she disagrees with Warren Buffett on IBM        
Despite Buffett's 30 percent reduction of his IBM stake, Kim Forrest says she's staying in.
          Sony and IBM create a magnetic-tape data cartridge with a capacity of 330 TB        
Storing data on magnetic tape? No, that is not a mistake! This old technology from the dawn of computing is making a comeback. In recent years, the growth of the internet, the rise of cloud services and the use of ever larger volumes of data...
          Configuration Specialist - Windows 10/IBM Rational        

          Can Google catch Amazon and Microsoft in cloud?        
From a numbers standpoint, Google is actually a distant fourth in the $23 billion cloud infrastructure services market, according to Synergy Research Group. AWS ranks first with 31 percent, followed by Microsoft Azure at 9 percent, IBM at 7 percent and Google Cloud Platform at 4 percent, Synergy data show. That means of Google parent […]
          Fix bad sound on Lenovo ThinkPad [SOLVED]        
If you read my blog you probably know that I am a big ThinkPad fan. Lenovo has continued the classic IBM lineup and now there is a ThinkPad for almost any taste. My current favorite is the Lenovo ThinkPad T470. This 14″ beauty is nearly perfect for work and travel. However, from the factory it does suffer from a (fixable) issue that seems to plague many modern ThinkPads… terrible sound!  Specifically, the sound is extremely echo prone… After listening to
          IBM BPM Designer/Developer        

          Association of Professional Genealogists Hit By Scam - Lessons Learned        
Last week the Association of Professional Genealogists announced it had been targeted by scam artists. The villains were able to impersonate the secretary's email and offered to pay APG members an hourly fee to lobby state legislatures regarding forensic genealogy.  In a further attempt, members received requests to "Support Diane's Brain Cancer Battle."  APG quickly quashed the scam by alerting members, asking them to report any such fundraising attempt, and asking those affected to notify the organization.  

I have some professional knowledge of cyber security and I have been the target of email cloning and twice had my credit card accounts hacked.  I, therefore, would like to offer a few cautions of my own.  

1. Source. Be cautious of any solicitation via email or social media, especially Facebook. We have all heard about fake news on social media, yet it is hard not to click on that story about the baby with cancer.  Look carefully - is it a story supposedly about someone in a small Missouri town but the link takes you to a website that is not linked to any local, regional or state news source?  Don't be taken in just because it is a sad story or even a happy one!

2. Context.  Does the email read like a normal, regular communication you receive from an organization? Often databases are hacked by groups in foreign countries and then sold to individual criminals or organizations.  If you closely read the fake email there will be grammatical mistakes or colloquialisms that don't fit.  For example, did a New England genealogical society end their request with "see y'all in the spring!" when you know their annual conference is in the fall and no self-respecting Bostonian would say y'all like we do in the south?  Sometimes it isn't that simple, but if you look you will often see things that just do not fit the norm.  

3. Legitimacy. If any legitimate organization is soliciting funding, take a minute to think about the source and what they are asking.  Would an organization such as APG solicit funding through their work emails for an individual?  The answer is never.  Most companies and non-profit organizations have rules about using their official communication sources for private funding. 

4. Check it out. At the national level any non-profit must register and is held accountable under federal law.  You can check out charity ratings at Charity Watch. For an organization such as a genealogical society, go to their website for information about events and solicitations.  If an organization is undertaking a fundraising campaign, you bet it will be front and center on their website.  Also, you can contact them via phone or mail, but use only phone numbers that you find officially linked to the organization, not one provided in the suspect email.

5. Be familiar with the typical scam.  You can check  this US government website that lists common fraud types:

According to an IBM report, the global cost of cybercrime will reach $2 trillion by 2019, a threefold increase from the 2015 estimate of $500 billion. Small, regional and even local organizations are not immune. The IBM report explains, "a staggering 50 percent of small and mid-sized organizations reported suffering at least one cyberattack in the last 12 months."

Your best bet is to be aware, be vigilant of your own finances and social media presence and most importantly when and if you are ready to give to a worthy cause, take the time to do the research and get your hard-earned dollars in needy hands, not those of criminal organizations.

Kathleen W. Hinckley, CG

Executive Director
          Macintosh At 30: Interesting, Profound And Curious Things Said About Apple's 'Insanely Great' Computer        
Video “Hello, I’m Macintosh. It sure is great to get out of that bag. Unaccustomed as I am to public speaking, I’d like to share with you a maxim I thought of the first time I met an IBM mainframe: NEVER TRUST A COMPUTER YOU CAN’T LIFT.” — The [...]
          The Power Visualization System        

The Power Visualization System, lecture by Armando Garcia. This video has been recorded on November, 1992. From University Video Communications’ catalog:

Introduced by Abe Peled, this talk describes the architecture and design of the IBM Power Visualization System (PVS), an integrated high-performance system for interactive simulation and visualization. The unique capabilities of the system, such as the ability to support interactive visualization and dynamic datasets in a wide range of application areas, are demonstrated using several visualization examples. Lloyd Treinish provides a demonstration.

          Comment on IBM Connect 2013 by Viktor Krantz        
That's for sure! But the content is sexy. ;-)
          Comment on IBM Connect 2013 by Rob Novak        
You've had sexier session titles. :-)
          The Geek Christmas Quiz 2013: The Answers!        

Here are the answers to this year’s Geek Christmas Quiz.

What does this acronym stand for? (one point per correct answer)
    1. AWS – Amazon Web Services
    2. SQL – Structured Query Language
    3. NATO – North Atlantic Treaty Organization
    4. GNU – GNU's Not Unix
    5. SCSI – Small Computer System Interface
    6. HTML – Hyper Text Markup Language
    7. HTTP – Hyper Text Transfer Protocol
    8. ReST – Representational State Transfer
    9. NASA – National Aeronautics and Space Administration
    10. RAF – Royal Air Force

Name that computer (one point for the model, another for the company/inventor)

NeXT Cube / NeXT
Apple II / Apple
Difference Engine / Babbage
IBM PC / IBM
Macintosh / Apple
ZX Spectrum / Sinclair
System 360 / IBM
Colossus / Tommy Flowers
PlayStation / Sony

Name that programming language (one point for each correct answer)

  1. BASIC
  2. C#
  3. C
  4. F#
  5. Python
  6. T-SQL
  7. Bash
  8. Lisp/Scheme/Clojure
  9. Haskell

Science fiction (one point for each correct answer)

    1. "These are not the droids you are looking for" (what film?) – Star Wars Episode IV
    2. "Open the pod bay doors please HAL" (what film, or book?) – 2001: A Space Odyssey
    3. Who wrote Rendezvous with Rama? – Arthur C Clarke
    4. What spaceship uses dilithium crystals? – USS Enterprise
    5. What does the acronym CREW in Iain Banks's Culture novels mean when referring to weapons? – Coherent Radiation Emission Weapon
    6. Name 3 Futurama characters – e.g. Fry, Leela, Bender
    7. Who wrote the 3 laws of robotics? – Isaac Asimov
    8. What is the first law of robotics? – “A robot may not injure a human being or, through inaction, allow a human being to come to harm”
    9. Who is the leader of the Daleks? – Davros
    10. "Directive?" (which film?) – WALL-E

Science (one point for each correct answer)

    1. 1 degree Fahrenheit is a different size from 1 degree Celsius, which means the two scales must cross at some point. At what temperature do both scales read the same? – −40°.
    2. When and where was the first cell in your body created? – The egg cell that became you was formed while your mother was herself a fetus inside your grandmother's womb.
    3. What is the device which blends air and fuel in an internal combustion engine called? – Carburettor.
    4. What was the name of the first hydrogen (thermonuclear) bomb test? – Mike.
    5. What is special about Sirius, the Dog Star? – It is the brightest star in the night sky.
    6. What year did the last man land on the moon? – 1972 (Apollo 17)
    7. Who invented the jet engine? – Frank Whittle.
    8. In trigonometry what is calculated by dividing the adjacent by the hypotenuse? - Cosine
    9. Which part of the Earth lies between the outer core and the crust? – The mantle.
    10. Where in the body are alveoli to be found? – The lungs.
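The −40° answer in the science round falls straight out of the conversion formula: with F = (9/5)·C + 32, setting F = C gives −(4/5)·C = 32, so C = −40. A two-line check:

```python
# Fahrenheit/Celsius conversion; the two scales agree only where c_to_f(c) == c.
def c_to_f(c: float) -> float:
    return 9 / 5 * c + 32

print(c_to_f(-40))  # -40.0: the unique fixed point of the conversion
```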

Name that cartoon character (one point for each correct answer)

Lisa (The Simpsons)
Iron Man
Fone Bone
Dennis the Menace
Tom & Jerry
Kiki (Kiki’s Delivery Service)
Pointy-Haired Boss
Professor Calculus
Dan Dare

          today's leftovers        
  • XWayland Grabs Onto Keyboard Grab Support

    Adding to the list of changes for X.Org Server 1.20 that will be released in the future is grab protocol support for XWayland.

    Last year is when the keyboard grabbing protocol for Wayland was proposed and made it into Wayland-Protocols 1.9. This is about allowing virtual machines, VNC viewers, or XWayland to be able to "grab" all input from a device and send to a particular surface, modeled like a keyboard locking mechanism.

  • Supercomputing by API: Connecting Modern Web Apps to HPC

    In this video from OpenStack Australia, David Perry from the University of Melbourne presents: Supercomputing by API – Connecting Modern Web Apps to HPC.

  • [Video] What’s New in Mageia 6
  • Geeko in the Wild
  • Technical Standards: The Hard Part of Making Everyone Happy

    A recent controversy involving the group that sets the rules of the road for the web is a great reminder of how challenging standards-making really is, even if your standards are the ones everyone is using.

    Standards have a way of bleeding into parts of life that you might not give a second thought to, as a consumer.

    Case in point: Watching a show on Netflix is a pretty satisfying ritual, isn’t it? Lots of people do it. Tens of millions in fact, many of them on their computers, in their web browsers.

  • Apple can’t end lawsuit over “breaking” FaceTime on iPhone 4, judge rules

    Back in February 2017, two Californians sued Apple in a proposed class-action lawsuit over the fact that the company disabled an older version of iOS. Disabling the outdated iOS had the effect of making FaceTime stop working on the customers' iPhone 4 devices.


    "Apple broke FaceTime in order to gain a financial advantage and reduce relay fees," Judge Koh also wrote. "Further, although Apple knew that it had intentionally disabled FaceTime, Apple told consumers that FaceTime had stopped working because of a 'bug resulting from a device certificate that expired.' Apple did not tell users that Apple had intentionally caused the digital certificate to expire prematurely."

  • The complete history of the IBM PC, part two: The DOS empire strikes

    The ethicality or lack thereof of what Paterson did has been debated for years. Gary Kildall stridently claimed many times that Paterson ripped off the actual CP/M source code, but this is a very problematic assertion. There is no evidence that Paterson even had access to the source, which Digital, like most companies then and now, guarded carefully.


    The real victor was Microsoft, which built an empire on the back of a shadily acquired MS-DOS.

          DTNS 3086 – Diminishing Results Management        
IBM makes a 300 TB tape drive, Amazon Echo gets hacked to listen to everything and filmmaker Jon Schiefer tells us his views on DRM, crowdfunding and more. With Tom Merritt, Scott Johnson and Jon Schiefer. MP3 Using a Screen Reader? Click here Multiple versions (ogg, video etc.) from Please SUBSCRIBE HERE. Follow us … Continue reading DTNS 3086 – Diminishing Results Management
          IBM Research Center Explores Deep Psychological Profiling on Twitter        
Researchers at IBM’s Almaden Research Center are looking to create “deep psychological profiles” of potential customers to better understand their values and needs. Dr. Eben Haber and his team are building off previous research by Tal Yarkoni that matched bloggers’ posts to the 5 modern dimensions of personality: extroversion, agreeableness, conscientiousness, neuroticism and openness to […]
          (Repost) How Oracle Databases Are Licensed and Charged (Database Licensing)         

Put simply, an Oracle License is just a piece of paper, a permit. As with many software products, a copy without a license is functionally identical to a licensed one; it is purely a legal matter. In other words, any Oracle build downloaded from the internet works normally for free, but how you use it matters: testing and development are fine, use it as you like; commercial use without a license is illegal, and Oracle has the right to sue!




Oracle currently offers two licensing models: per CPU (Processor) and per user (Named User Plus). The former is typically used when the user count is uncertain or very large, the classic example being internet-facing systems; the latter is usually used when the user count is fixed or small.


Per CPU: number of licenses = number of CPUs (cores) × core factor. The factor comes from an Oracle reference table; for example, IBM Power6 processors carry a factor of 1, and AMD and Intel processors a factor of 0.5. Details below:




(The core factor table here also listed the Sun UltraSPARC T1 and Sun UltraSPARC T2+ processors; the T1 carries a factor of 0.25, as used in the example below.)

From the formula, a Sun UltraSPARC T1 machine with 4 processors of 8 cores each needs 4 × 8 × 0.25 = 8 CPU licenses.
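The per-CPU formula (licenses = total cores × core factor, rounded up) can be sketched in a few lines. The factor values are the ones quoted in this article; the function itself is illustrative, not an official Oracle calculator.

```python
import math

# Core factors as quoted in the article: IBM Power6 = 1.0,
# AMD/Intel x86 = 0.5, Sun UltraSPARC T1 = 0.25.
CORE_FACTORS = {
    "IBM Power6": 1.0,
    "Intel x86": 0.5,
    "AMD x86": 0.5,
    "Sun UltraSPARC T1": 0.25,
}

def cpu_licenses(processor: str, sockets: int, cores_per_socket: int) -> int:
    """CPU licenses required = total cores x core factor, rounded up."""
    factor = CORE_FACTORS[processor]
    return math.ceil(sockets * cores_per_socket * factor)

print(cpu_licenses("Sun UltraSPARC T1", 4, 8))  # 4 * 8 * 0.25 = 8
```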


Per user: licensing an Oracle database by user count means counting the end users that connect to the database. Licenses bought per user may only be used for one system and may not be installed on multiple machines. Every user that accesses the Oracle database, whether a person or a device, counts as one Named User. In a B/S (browser/server) architecture, it is the number of users connecting to the middleware tier that counts.


Named User Plus: is defined as an individual authorized by you to use the programs which are installed on a single server or multiple servers, regardless of whether the individual is actively using the programs at any given time. A non-human-operated device will be counted.





Minimum Named User Plus licenses per product:

Oracle Database Standard Edition ONE: 5 Named User Plus licenses
Oracle Database Standard Edition: 5 Named User Plus licenses
Oracle Database Enterprise Edition: 25 Named User Plus licenses per CPU
Oracle Application Server Standard Edition ONE: 5 Named User Plus licenses
All other Oracle Application Server products: 10 Named User Plus licenses per CPU
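The per-product minimums above interact with the actual user count: you must license whichever is larger. A small sketch (the 25-per-CPU figure is the Enterprise Edition minimum from the table; the function is illustrative, not an official Oracle tool):

```python
# Required Named User Plus licenses: the actual user count, but never fewer
# than the per-CPU minimum times the number of CPUs (25/CPU for Enterprise
# Edition per the table above).
def named_user_plus_licenses(actual_users: int, cpus: int,
                             minimum_per_cpu: int = 25) -> int:
    return max(actual_users, minimum_per_cpu * cpus)

print(named_user_plus_licenses(actual_users=40, cpus=2))   # max(40, 50) = 50
print(named_user_plus_licenses(actual_users=100, cpus=2))  # max(100, 50) = 100
```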








Here is the list price of Oracle 11g Enterprise Edition:

Each license (whether a User License or a CPU License) also comes in validity terms: 1, 2, 3, 4 or 5 years, or perpetual. Naturally the price increases with the term.


Currently a perpetual Oracle 11g User License costs around RMB 3,500, so buying 50 perpetual User Licenses comes to 175,000. A perpetual CPU License costs 179,000; computed with the core factor for an IBM midrange machine, the purchase price is 179,000, close to the price of 50 User Licenses.







a. If a database instance is already deployed, query the v$license view:

SQL> select cpu_count_current,CPU_CORE_COUNT_CURRENT,CPU_SOCKET_COUNT_CURRENT from v$license;

CPU_COUNT_CURRENT CPU_CORE_COUNT_CURRENT CPU_SOCKET_COUNT_CURRENT
----------------- ---------------------- ------------------------
                2                      2                        1

The v$license view above shows that the database server currently has 2 logical CPUs and 2 cores in total, on 1 physical CPU socket: in other words, one dual-core physical CPU.

b. If no instance has been deployed on the server yet, the v$license view cannot be used; the necessary information can instead be obtained with OS commands.

On an x86 Linux server:


# If no "core id" line ends in 0 (the kernel exposes no core ids), fall back
# to counting "processor" entries (logical CPUs); otherwise the number of
# "core id : 0" lines equals the number of physical CPU packages:
grep core\ id /proc/cpuinfo | grep -c \ 0$ | grep ^0$ >> /dev/null && grep -c processor /proc/cpuinfo || \
grep core\ id /proc/cpuinfo | grep -c \ 0$

# Number of cores per physical CPU package:
grep "cpu cores" /proc/cpuinfo | uniq
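A rough Python equivalent of the grep pipeline above, for when you would rather parse /proc/cpuinfo in code. The field names are the standard ones emitted by x86 Linux kernels; the function is a sketch, not part of any Oracle tooling.

```python
# Count logical CPUs, physical cores, and sockets from /proc/cpuinfo text.
def count_cpus(cpuinfo: str):
    logical = 0
    sockets = set()
    cores = set()
    phys = None
    for line in cpuinfo.splitlines():
        key, _, value = (part.strip() for part in line.partition(":"))
        if key == "processor":
            logical += 1
        elif key == "physical id":
            phys = value
            sockets.add(value)
        elif key == "core id":
            cores.add((phys, value))  # core ids repeat across sockets
    return logical, len(cores), len(sockets)

sample = """\
processor : 0
physical id : 0
core id : 0
processor : 1
physical id : 0
core id : 0
"""
# Two logical CPUs sharing one core on one socket -> hyperthreading.
print(count_cpus(sample))  # (2, 1, 1)
```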



On IBM Power-series midrange machines, licenses are purchased per CPU module: with IBM Dual-Core Module POWER chips, one dual-core module (containing 2 physical CPUs) requires only 1.5 licenses. For the specific module types, consult IBM or your integrator.



Comment posted by 疯狂, 2014-10-27 16:43

          Re: KOM-TEK Urgent !! Recovering data formatted with GParted        

To restore the partition table, try testdisk;
testdisk is included on many Linux live CDs.
I have used it successfully myself, in a case where fdisk was run on the wrong device.
Good luck.

Erianto Simalango wrote:
> Dear all,
> I am begging for help to rescue data that was formatted. Here is the
> story...
> I have an IBM T40 laptop with Windows XP installed, plus many applications
> and very important data, above all my THESIS (both the text and the
> information system) that I had been working on bit by bit for almost
> 1.5 years. It vanished with a single click.
> Initially I wanted to install Ubuntu on the laptop as a dual boot. After
> partitioning with GParted from a live CD, I restarted and inserted the
> Ubuntu Feisty CD. Once the live CD was running I followed the install
> tutorial on Udienz's blog. From the Ubuntu menu I clicked System -
> Administration - Gnome Partition to view the partitions; I had not yet
> given them labels, so they were still New Partition #1 - #4 (boot,
> root, swap and home). With Gnome Partition open I clicked the Device menu,
> if I remember correctly, then clicked Label; a dialog box appeared, and I
> clicked Create and OK.
> To my misfortune all the partitions disappeared instead, and it could not
> be undone.....
> How do I rescue the data, also using a live CD, or should I install
> Windows first? I am afraid that installing anything will overwrite it.
> Please help...?
> So I am asking for help on how to restore that data, because after I
> restarted, the computer just comes up blank. I feel awful: I had promised
> that today I would bring the thesis and the program to meet my lecturer.
> Please ......... pls............... help save my
> data...........
> Thank you.
> --
> Ubuntu register : #17041
> Blog : <>
> Blog :
> <>
> PH : 0761-7076141
> HP. 081933703356
> [Non-text portions of this message have been removed]


          Glassdoor CEO Ratings of Salesforce Competitors Puts IBM Last        
By Eugen Tarnow, Ph.D. Avalon Business Systems, Inc. The Wall Street Journal published an assessment by Salesforce of their competitors. The first line item was the CEO ...
          IBM - hip?!        
The pictures speak for themselves: the IBM Relay 2015 conference. By Eugen Tarnow, Ph.D. ReduceMail Pro has enterprise solutions to solve any issue related to email retention. We will work ...
          ReduceMail Pro now supports Notes V9.0.1        
As Lotus Notes / IBM Mail keeps changing, ReduceMail Pro is the archiving system that will keep up with those changes! The latest ReduceMail Pro system is now supporting V9.0.1. ReduceMail Pro has ...
          The future of SoftLayer is bright. And it’s Bluemix.        

Since the founding of SoftLayer in May of 2005, our motto has been “Innovate or Die.” Over the past decade, our business has grown exponentially and evolved to meet the needs of our customers and seize opportunities in the marketplace. The more things change, the more they stay the same.

Today, we’re excited to share the next big step in SoftLayer’s evolution as part of the IBM Cloud portfolio: IBM Bluemix is integrating SoftLayer products and services into its vast catalog of infrastructure, platform, and application services!

          IBM Cloud Object Storage Open Trial Now Available        

We're pleased to announce that our new Public Cloud Object Storage Standard Cross-Regional service is now available. Get started today with our Open Trial program and we’ll waive fees through December 31, 2016.

          Deploy a new VMware environment in hours on IBM Cloud        

Using advanced automation developed through the partnership between IBM and VMware, you can now go from weeks to hours in deploying a new VMware environment on IBM Cloud with two new offerings:

          Apache Hadoop and Big Data on IBM Cloud        

Companies are producing massive amounts of data—otherwise known as big data. There are many options available to manage big data and the analytics associated with it. One of the more popular options is Apache Hadoop, open source software designed to scale up and down quickly with a high degree of fault tolerance. Hadoop lets organizations gather and examine large amounts of structured and unstructured data.

          “Lift and Shift” Existing VMware Workloads to the Public Cloud        

Whatever your opinion is of IBM Cloud, the company has made tangible strides to provide a compelling hybrid cloud strategy for the enterprise. Several analysts even recently acknowledged IBM leadership in this area. Based on the recent announcement with VMware, you’ll understand why existing VMware clients are pretty excited about IBM Cloud’s hybrid strategy.

          A data-leak scandal forces a government reshuffle in Sweden        
The case stems from the contracts the government awarded to IBM in 2015 to outsource transport services, whose data was allegedly leaked to third parties.
          Comment on Has the IRS determined that the Rollover as Business Startups (ROBS) isn’t valid? by anon        
I am involved with a ROBS transaction with Guidant Financial Group. This is very hard to explain. Here is how Guidant works: Guidant Financial Group 1) Form a corporation. 2) Set up a 401k for the C-Corp. The corporation sponsors a 401k plan designed to allow for investment into your corporation. This comes complete with a favorable determination letter from the IRS. 3) Guidant guides you through the process of rolling the pre-existing 401k from a previous employer (IBM) into the 401k of the new C-Corp (it belongs to the employees). 4) As administrator of the C-Corp 401k I invested the funds by buying shares of the C-Corp. Now the business is debt free and cash rich from the sale of stock. This is how I funded the C-Corp to buy a franchise in June of 2006. I now understand this is called a ROBS account, a Roll Over as Business Start Up transaction.

After being let go from IBM for the second time in twenty-four years in 2005, I investigated franchises. While searching franchise information on the internet I also came across the Guidant Financial Group and how to use your pension funds to buy a franchise or start a new business. I had an attorney and an accountant review the Guidant process and they didn't raise any big concerns, so I entered into a contract with Guidant and purchased a franchise called Stretch-n-Grow, a children's exercise program where I go into the area preschools and teach a 30-minute exercise program. I purchased the franchise in June 2006.

What I thought was a great thing has turned into a nightmare. I have been to my SCORE chapter, talked to many attorneys, talked with different departments in the IRS, talked with the DOL. As nice as people have been, the answer is always the same: I'm sorry, but it's not our jurisdiction. Including the startup fee ($5,000.00) for Guidant, I have spent over $25,000.00 in Guidant recordkeeping fees and accounting fees from 6/2006 to 2/29/2012, which is my business year-end date. Are you starting with more than $250,000?
I started with less than $98,000.00 in my 401k. From 2006 to 2008, ROBS promoters like Guidant incorrectly advised me that I didn't have an annual filing of Form 5500 because of a special exception in the Form 5500-EZ instructions. This exception applies when plan assets are under a specified dollar amount ($250,000) and the plan covers only an individual, or an individual and spouse. In a ROBS arrangement the PLAN, through its company stock investments, owns the business; it's not the individual that owns the business. Now all plan sponsors have to file a Form 5500 annually. Guidant recordkeeping fees have gone from $800.00 in 2006 to $1,188.00 a year. With the Form 5500 requirement, a business valuation has to be done annually, at a cost of $750.00 – $2,000.00 (Guidant did have a presentation that said the valuation was free, but when I asked I was directed to a site where you can order a valuation). A required fidelity bond has to be purchased annually. Workers compensation insurance: because the PLAN owns most of the stock and the stock is not owned by one individual, I was billed $708.00 annually; based on certain law they base my corp. on a payroll of $31,200.00, while my actual payroll was $11,000.00. Most officers starting a corp. can be exempted from workers comp. insurance completely. (When I questioned Guidant about this they didn't know about the law.) Corporate payroll taxes are much more expensive. And because everything with Guidant is so complicated, you have to have a CPA who can understand and guide you through everything. This is costing me over $3,000.00 a year. I don't know why the accountant and attorney didn't have more warnings about entering into this kind of arrangement. There are only two ways to get out of this type of arrangement: going bankrupt, or a complicated way of buying back your stock, which in my case I can't afford. Guidant offers one hour of advice with outside counsel, but this is the same attorney that helped get me into this. 
I don't have any employees and won't, because it would only make things worse and more costly. Because of the way the program is set up I haven't been able to find an attorney to advise me, and I can't find an accountant that is affordable. I am on my way to bankruptcy, and I believe most small businesses can't withstand the high costs of doing business with Guidant. I need help to find a way out of this mess that is affordable and legal. I want other people to know about ROBS transactions. I just received notice from Guidant that they are increasing the recordkeeping fee from $99.00 to $120.00 per month ($1,440.00 a year) because they will be providing the business valuation, which is now a requirement and justifies the recordkeeping fee increase. I feel completely stripped of any control over my account, and it will make it impossible to get the shares down to be able to do a buy-out of the stock. Why would I expect Guidant to do anything that is in my best interest? I recently fired my CPA because it was brought to my attention that his fees were out of line at over $3,000.00 a year and he was becoming less forthcoming about itemizing his invoices. This was just additional insult to injury. I just read about IRS PLR 201236035. The following is a report done by the IRS in the fall of 2010; it confirms my worries. Retirement News for Employers - Fall 2010 Edition - Rollovers as Business Start-Ups Compliance Project What is a ROBS? ROBS is an arrangement in which prospective business owners use their retirement funds to pay for new business start-up costs. ROBS plans, while not considered an abusive tax avoidance transaction, are questionable because they may solely benefit one individual – the individual who rolls over his or her existing retirement funds to the ROBS plan in a tax-free transaction. The ROBS plan then uses the rollover assets to purchase the stock of the new business. 
Promoters aggressively market ROBS arrangements to prospective business owners. In many cases, the company will apply to IRS for a favorable determination letter (DL) as a way to assure their clients that IRS approves the ROBS arrangement. The IRS issues a DL based on the plan’s terms meeting Internal Revenue Code requirements. DLs do not give plan sponsors protection from incorrectly applying the plan’s terms or from operating the plan in a discriminatory manner. When a plan sponsor administers a plan in a way that results in prohibited discrimination or engages in prohibited transactions, it can result in plan disqualification and adverse tax consequences to the plan’s sponsor and its participants. Employee Plans ROBS Project EP initiated a ROBS project last year to: • Define traits of compliant versus noncompliant ROBS plans; • Identify ROBS plans that are noncompliant and take action to correct them; and • Use results to design compliance strategies focusing on identified issues and trends (for example, Employee Plans Compliance Resolution System, Fix-It Guides, Web-based information, newsletters, and speeches). Using compliance checks, we initially focused on companies that sponsored a plan and received a DL but didn’t file a Form 5500, Annual Return/Report of Employee Benefit Plan, or Form 5500-EZ, Annual Return of One-Participant (Owners and Their Spouses) Retirement Plan, and/or Form 1120, U.S. Corporation Income Tax Return. 
Our contact letter to plan sponsors asked questions about the ROBS plan’s recordkeeping and information reporting requirements, including:
• the plan’s current status
• plan contribution history
• information on the rollover or direct transfer of the assets into the ROBS plan
• participant information
• stock valuation and stock purchases
• general information about the business itself
• why no Form 5500 or 5500-EZ and/or Form 1120 were filed

We always invite a plan sponsor to furnish any other documents or materials that they believe will be helpful for us to review as part of the compliance check.

ROBS Project Findings

New Business Failures

Preliminary results from the ROBS Project indicate that, although there were a few success stories, most ROBS businesses either failed or were on the road to failure, with high rates of bankruptcy (business and personal), liens (business and personal), and corporate dissolutions by individual Secretaries of State. Some of the individuals who started ROBS plans lost not only the retirement assets they accumulated over many years, but also their dream of owning a business. As a result, much of the retirement savings invested in their unsuccessful ROBS plan was depleted or ‘lost,’ in many cases even before they had begun to offer their product or service to the public.

Not Filing Form 5500 or Form 1120

Many ROBS sponsors did not understand that a qualified plan is a separate entity with its own set of requirements. Promoters incorrectly advised some sponsors they did not have an annual filing requirement because of a special exception in the Form 5500-EZ instructions. The exception applies when plan assets are less than a specified dollar amount and the plan covers only an individual, or an individual and his or her spouse, who wholly own a trade or business, whether incorporated or unincorporated.
In a ROBS arrangement, however, the plan, through its company stock investments, rather than the individual, owns the trade or business. Therefore, this filing exception does not apply to a ROBS plan, and the annual Form 5500 or 5500-EZ (5500-SF for filing electronically) is still required.

Specific Problems with ROBS

Some other areas where the ROBS plan could run into trouble:
• After the ROBS plan sponsor purchases the new company’s employer stock with the rollover funds, the sponsor amends the plan to prevent other participants from purchasing stock.
• If the sponsor amends the plan to prevent other employees from participating after the DL is issued, this may violate the Code qualification requirements. These types of amendments tend to result in problems with coverage and discrimination, and potentially result in violations of benefits, rights and features requirements.
• Promoter fees
• Valuation of assets
• Failure to issue a Form 1099-R, Distributions From Pensions, Annuities, Retirement or Profit-Sharing Plans, IRAs, Insurance Contracts, etc., when the assets are rolled over into the ROBS plan

If You Have Questions

E-mail us and we will answer your questions about the Project and how it relates to your situation. Include the words “ROBS Project” in the subject line. Additionally, we encourage you to e-mail any comments for the ROBS Project or any other EPCU project, especially if these suggestions focus on areas of potential noncompliance.
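The filing-exception logic above is mechanical enough to sketch in code. The helper below is purely illustrative, not IRS guidance; the asset threshold is passed in as a parameter because the report only refers to "a specified dollar amount", and the figure used in the example is invented:

```python
def form_5500ez_exception_applies(plan_assets, asset_threshold,
                                  covers_only_owner_or_spouse,
                                  plan_owns_business):
    """Rough model of the Form 5500-EZ filing exception described above.

    The exception can apply only when plan assets are under the specified
    dollar amount and the plan covers only an individual (or an individual
    and spouse) who wholly own the trade or business.  In a ROBS
    arrangement the plan itself, through its stock holdings, owns the
    business, so the exception never applies and Form 5500/5500-EZ is
    still required.
    """
    if plan_owns_business:  # the ROBS case
        return False
    return plan_assets < asset_threshold and covers_only_owner_or_spouse

# Illustrative threshold only; an owner-only plan under the threshold
# qualifies for the exception, while a ROBS plan never does.
print(form_5500ez_exception_applies(100_000, 250_000, True, plan_owns_business=False))  # True
print(form_5500ez_exception_applies(100_000, 250_000, True, plan_owns_business=True))   # False
```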
          18.02.2008 News summary - audio version         
Today: the Internet is more dangerous than we think; Samsung's 64 GB SSD with SATA II enters production; the PS3 beats the Xbox 360 for the first time; torrents in a tank; IBM releases an educational MMO for teenagers;
          Lenovo IBM ThinkPad Edge E520 laptop hinge - 740,000 Rial         

Brand new, factory sealed

          IBM Datacap 9.0, 9.0.1, 9.1.0 and 9.1.1 DDK: Customizing ruleset configuration panels for FastDoc and Datacap Studio        
IBM Datacap provides ruleset configuration panels, used at application design time in FastDoc and Datacap Studio, that simplify ruleset configuration: a UI prompts the user for configuration settings and then creates the appropriate ruleset XML. Additional custom ruleset panels can be created using the provided Visual Studio C# template.
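To make the "prompt for settings, emit ruleset XML" flow concrete, here is a minimal sketch in Python. The element and attribute names are purely illustrative; the real ruleset XML schema is defined by Datacap, and actual panels are built from the Visual Studio C# template:

```python
import xml.etree.ElementTree as ET

def build_ruleset_xml(ruleset_name, settings):
    """Turn user-supplied configuration settings into a ruleset XML
    fragment, the way a configuration panel would behind its UI."""
    root = ET.Element("Ruleset", name=ruleset_name)
    for key, value in settings.items():
        setting = ET.SubElement(root, "Setting", name=key)
        setting.text = str(value)
    return ET.tostring(root, encoding="unicode")

# Settings a panel might have collected from the designer
xml_fragment = build_ruleset_xml("Recognize", {"Language": "English", "DPI": 300})
print(xml_fragment)
```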
          Apply middleware maintenance to patterns and instances in IBM PureApplication System        
Learn how to apply middleware maintenance on IBM PureApplication System by using IBM Installation Manager. In this video, you go through the contents and structure of the IBM Installation Manager Repository. Then, you learn how to apply emergency fix packs and content from the IBM Installation Manager repository to patterns and deployed pattern instances.
          Using IBM Database Add-ins for Visual Studio 2013 in DB2 Cancun (10.5 Fix Pack 4)        
This tutorial explains the key new capabilities in IBM Database Add-ins for Visual Studio 2013 available with the DB2 10.5 Fix Pack 4. The authors explain support of the Microsoft Visual Studio 2013 feature set with IBM data servers (DB2 for z/OS; DB2 for i; DB2 for Linux, UNIX, and Windows; and Informix).
          Data integration and analytics as a service, Part 1: DataWorks        
Most data integration specialists find that loading and migrating data from a source to a target are time-consuming and tedious tasks. Now with the IBM Bluemix DataWorks service, you can load and migrate data from different sources to different targets easily. The IBM DataWorks service, which includes the DataWorks APIs and DataWorks Forge, allows developers to load, cleanse, and profile data, in addition to migrating it to different targets seamlessly. DataWorks Forge is primarily for knowledge workers and helps them to select data, visualize it, and prepare it for use after enriching and improving its quality. This tutorial is Part 1 of a series covering data integration and analytics as a service.
          Connect your apps to DB2 with high-security Kerberos        
This tutorial is a primer to help programmers using IBM Data Server Drivers get applications quickly running in a Kerberos environment. We will be setting up a simple Kerberos environment on Windows, configuring DB2 to use Kerberos authentication, and enabling the client drivers to securely authenticate using Kerberos.
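As a sketch of what the client-side configuration amounts to, the following assembles a keyword-style connection string that requests Kerberos authentication. The keyword names follow common IBM data server driver usage, but treat them as assumptions and confirm them against the driver documentation for your version:

```python
def db2_kerberos_conn_string(database, host, port, target_principal=None):
    """Build a DB2 connection string that requests Kerberos authentication.

    With Kerberos, the driver obtains a service ticket from the KDC, so no
    UID/PWD keywords appear in the string.
    """
    parts = [
        f"DATABASE={database}",
        f"HOSTNAME={host}",
        f"PORT={port}",
        "PROTOCOL=TCPIP",
        "AUTHENTICATION=KERBEROS",
    ]
    if target_principal:  # service principal of the DB2 server, if required
        parts.append(f"TARGETPRINCIPAL={target_principal}")
    return ";".join(parts) + ";"

conn_str = db2_kerberos_conn_string("SAMPLE", "db2host.example.com", 50000)
print(conn_str)
```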
          Leverage DB2 Connect for insert operations in existing C/C++ IBM Data Server applications        
This tutorial explains the key best practices when developing C/C++ applications against the IBM Data Servers; (DB2 for z/OS; DB2 for i; DB2 for Linux, UNIX, and Windows; and Informix). It provides details for leveraging several of the features in DB2 Connect that pave the way for better performance and align with best-practice recommendations. You can use this information while developing/enhancing existing applications in C/C++ targeting IBM Data Servers.
          Optimizing cloud applications with DB2 stored procedures        
This tutorial describes the IBM DB2 stored procedure framework, methods to monitor stored procedure performance, and methods to optimize stored procedure performance. DB2 provides a routine monitoring framework that helps pinpoint the statements or parts of the procedure code that can be tuned for better performance. The tutorial also describes good practices for writing DB2 SQL/PL and Oracle PL/SQL procedures and simple way of migrating Oracle PL/SQL procedures to DB2.
          Setting up your DB2 subsystem for query acceleration with DB2 Analytics Accelerator for z/OS        
Adding IBM DB2 Analytics Accelerator for z/OS to DB2 for z/OS environments has enabled companies in a variety of industries, from major banks and retailers to IT services and healthcare providers, to significantly improve query processing and increase analytics capabilities. Providing efficiency and cost-effectiveness, DB2 Analytics Accelerator can process certain types of eligible queries, especially business intelligence queries, faster than DB2.
          Establish an information governance policy framework in InfoSphere Information Governance Catalog        
With the substantial growth in data volume, velocity, and variety comes a corresponding need to govern and manage the risk, quality, and cost of that data and provide higher confidence for its use. This is the domain of information governance, but it is a domain that many people struggle with in how to get started. This article provides a starting framework for information governance built around IBM InfoSphere Information Governance Catalog.
          Build a DB2 CLI console to manage SQLDB databases        
Manage your SQLDB databases with ease, using an application you can quickly build and deploy on the IBM cloud platform, Bluemix.
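The article's console runs on Bluemix; as a language-neutral illustration of the core pattern it describes (a command loop that dispatches to handlers), here is a tiny sketch in Python. The command names and handlers are invented stand-ins for SQLDB management operations:

```python
def make_console(handlers):
    """Return a function that parses 'command arg1 arg2 ...' lines and
    dispatches to the matching handler, like a minimal CLI console."""
    def run(line):
        command, *args = line.split()
        handler = handlers.get(command)
        if handler is None:
            return f"unknown command: {command}"
        return handler(*args)
    return run

# Hypothetical handlers standing in for real database operations
console = make_console({
    "list-tables": lambda: "T1, T2",
    "describe":    lambda table: f"columns of {table}",
})
print(console("describe T1"))  # columns of T1
print(console("drop T1"))      # unknown command: drop
```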
          Increase throughput with z/OS Language Environment heap storage tuning method        
The z/OS Language Environment (LE) component provides a common runtime environment for the IBM versions of certain high-level languages. LE provides runtime options that can be customized according to a program's behavior to achieve better execution performance. This paper puts forward an LE heap storage tuning method for IBM's InfoSphere Data Replication for DB2 for z/OS (Q Replication). The tuning reduces contention among concurrent heap storage allocation requests from multiple threads of the Q Capture and Q Apply programs of Q Replication for z/OS, while keeping overall heap storage allocation to a minimum. After applying the heap tuning techniques outlined in this paper, a notable 13% throughput improvement was achieved for OLTP-type workloads, and CPU reduction was observed for all workload types.
          Use industry templates for advanced case management, Part 1: Introducing the Credit Card Dispute Management sample solution template for IBM Case Manager        
IBM Case Manager provides the platform and tools for a business analyst to define and implement a new generation of case management solutions. To accelerate the development of solutions in particular industries, IBM Case Manager supports the notion of a solution template, which is a collection of case management assets that can be customized and extended to build a complete solution. To illustrate the value of solution templates and the features of IBM Case Manager, IBM has provided two sample solution templates that can be used as learning tools for users new to the platform. This tutorial introduces one of those templates: Credit Card Dispute Management from the financial services industry. This sample template can serve as a foundation for clients who want to build a similar solution. The template can also serve as a learning tool and reference for clients to build other solutions in other industries.
          Using temporal tables in DB2 10 for z/OS and DB2 11 for z/OS        
Temporal tables were introduced in IBM DB2 10 for z/OS and enhanced in V11. If you have to maintain historical versions of data over several years, temporal tables can be helpful for period-based data. In this tutorial, explore how your applications can use temporal tables to manage different versions of data, simplify service logic, and provide information for auditing. Learn about when and how to use three types of temporal tables to manage period-based data.
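To see what a system-period temporal table buys you, here is a toy in-memory model (not DB2 code): every update moves the superseded row version into a history list with its validity period, much as DB2 moves old versions into an associated history table, and an `as_of` lookup plays the role of `FOR SYSTEM_TIME AS OF`. Integer ticks stand in for system timestamps:

```python
class TemporalTable:
    """Toy model of a system-period temporal table."""

    def __init__(self):
        self._tick = 0      # stand-in for the system clock
        self.current = {}   # key -> (row, begin_tick)
        self.history = []   # (key, row, begin_tick, end_tick)

    def upsert(self, key, row):
        self._tick += 1
        if key in self.current:
            old_row, begin = self.current[key]
            self.history.append((key, old_row, begin, self._tick))
        self.current[key] = (row, self._tick)
        return self._tick

    def as_of(self, key, tick):
        """Analogue of SELECT ... FOR SYSTEM_TIME AS OF."""
        if key in self.current and self.current[key][1] <= tick:
            return self.current[key][0]
        for k, row, begin, end in self.history:
            if k == key and begin <= tick < end:
                return row
        return None

t = TemporalTable()
t1 = t.upsert("acct-1", {"limit": 500})
t2 = t.upsert("acct-1", {"limit": 900})
print(t.as_of("acct-1", t1))  # {'limit': 500}
print(t.as_of("acct-1", t2))  # {'limit': 900}
```

The point of the structure is that old versions are never overwritten, only closed out, which is what makes auditing and point-in-time queries possible without extra service logic.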
          Use industry templates for advanced case management, Part 2: Introducing the Auto Claims Management sample solution template for IBM Case Manager        
IBM Case Manager provides the platform and tools for business analysts to define and implement a new generation of case management solutions. To accelerate the development of solutions in particular industries, IBM Case Manager supports the notion of a solution template: a collection of case management assets that can be customized and extended to build a complete solution. To help illustrate the value of solution templates and the abilities of IBM Case Manager, IBM has provided two sample solution templates that can be used as learning tools for new users of the platform. This tutorial introduces one of those templates: Auto Claims Management, from the insurance services industry. Gain an understanding of what a template is, and learn about the assets delivered in this sample template and how they were built. (This tutorial includes the code for this sample template as well as instructions on how to deploy it.)
          Using the MDM Application Toolkit to build MDM-centric business processes, Part 5: Security        
This is the fifth article in a series that describes how to create process applications for master data by using IBM Business Process Manager (BPM). This series refers to the InfoSphere Master Data Management (MDM) Application Toolkit and IBM BPM 8.0.1, both of which are provided with InfoSphere MDM 11.0. This tutorial guides you through several security issues when creating MDM processes using the Application Toolkit. Learn about managing security issues when connecting to an MDM server, enabling encrypted flows between your process and MDM, certificate management, and restricting the REST service to HTTPS.
          Monitor your database without logging        
Jose Bravo demonstrates how to set up the integration between IBM Security QRadar SIEM and IBM Guardium to create an efficient, low-impact database monitoring solution. He then walks through a typical use case scenario where an unauthorized transaction on a database is detected and raised as a security offense in the QRadar SIEM.
          Configure multiple HADR databases in a DB2 instance for automated failover using Tivoli System Automation for Multiplatforms        
Learn how to enable automated failover support using IBM Tivoli System Automation for Multiplatforms for multiple databases configured for High Availability Disaster Recovery in a single DB2 instance. Walk through scenarios that use db2haicu in interactive mode and with an XML file as input. The example setup is for DB2 Enterprise Server Edition environments in Linux or AIX with DB2 9.5 and higher.
          Best practices for IBM InfoSphere Blueprint Director, Part 3: Sharing Information Architectures through InfoSphere Blueprint Director        
This article provides best practices for publishing information architecture blueprints using IBM InfoSphere Blueprint Director. Publishing architecture blueprints enables sharing of the most current solution architecture with all team members, allowing everyone to experience the same project vision.
          IBM Accelerator for Machine Data Analytics, Part 5: Speeding up analysis of structured data together with unstructured data        
Previously in this series, you created a searchable repository of semi-structured and unstructured data -- namely, Apache web access logs, WebSphere logs, Oracle logs, and email data. In this tutorial, you will enrich the repository with structured data exported from a customer database. Specifically, you will search across structured customer information and semi-structured and unstructured logs and emails, and perform analysis using BigSheets to identify which customers who emailed Sample Outdoors Company during the July 14th outage were more loyal than others.
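The analysis described, correlating structured customer records with the senders of outage-window emails, boils down to a join plus a time filter. A minimal sketch with made-up data (the customer records, addresses, dates, and loyalty field are all invented for illustration):

```python
from datetime import date

customers = {  # structured data, e.g. exported from a customer database
    "ann@example.com": {"name": "Ann", "loyalty": "gold"},
    "bo@example.com":  {"name": "Bo",  "loyalty": "silver"},
}
emails = [  # semi-structured data: (sender, date received)
    ("ann@example.com", date(2013, 7, 14)),
    ("bo@example.com",  date(2013, 7, 20)),
]

outage = date(2013, 7, 14)  # the "July 14th outage"; year is invented
affected = [customers[sender] for sender, when in emails
            if when == outage and sender in customers]
print(affected)  # [{'name': 'Ann', 'loyalty': 'gold'}]
```

At warehouse scale the same join runs inside BigSheets over the repository rather than in application code, but the shape of the question is identical.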
          IBM Accelerator for Machine Data Analytics, Part 3: Speeding up machine data searching        
Machine logs from diverse sources are generated in voluminous quantities across an enterprise. IBM Accelerator for Machine Data Analytics simplifies the implementation work required to accelerate analysis of semi-structured, unstructured, or structured textual data.
          DB2 10.1 DBA for Linux, UNIX, and Windows certification exam 611 prep, Part 2: Physical design        
This tutorial discusses the creation of IBM DB2® databases, as well as various methods used for placing and storing objects within a database. The focus is on partitioning, compression, and XML, which are all important performance and application development concepts you need to store and access data quickly and efficiently. This is the second in a series of eight tutorials you can use to help you prepare for the DB2 10.1 DBA for Linux, UNIX, and Windows certification exam 611. The material in this tutorial primarily covers the objectives in Section 2 of the exam.
          IBM Accelerator for Machine Data Analytics, Part 2: Speeding up analysis of new log types        
Machine logs from diverse sources are generated in voluminous quantities across an enterprise. IBM Accelerator for Machine Data Analytics simplifies the implementation work required to accelerate analysis of semi-structured, unstructured, or structured textual data.
          System Administration Certification exam 919 for Informix 11.70 prep, Part 6: Informix Data Warehousing        
In this tutorial, you'll learn about IBM Informix Data warehousing concepts and the tools that you can use to create data warehouses and optimize your data warehouse queries. This tutorial prepares you for Part 7 of the System Administration Certification exam 919 for Informix v11.70.
          Policy monitoring reports security setup with InfoSphere Master Data Management and Tivoli Directory Server        
The Policy Monitoring component is introduced in IBM's InfoSphere Master Data Management (MDM) v10.1 release. Using IBM Cognos Business Intelligence reporting tools, Policy Monitoring enables organizations to report on data quality by using aggregated metrics and to establish policies for compliance with data quality thresholds. This tutorial provides detailed steps to set up a basic security model in IBM Cognos Business Intelligence for providing authentication and authorization for Policy Monitoring reports.
          Use IBM InfoSphere Information Server to transform legacy data into information services        
Learn how to create and deploy information services to access legacy databases without writing any code. The generated Web services are created using IBM Information Server components including InfoSphere DataStage, InfoSphere Federation Server, InfoSphere Information Services Director, and WebSphere Transformation Extender for DataStage. In this example, the information services are delivered using a standard government XML model (GJXDM).
          Use IBM InfoSphere Optim Query Workload Tuner 3.1.1 to tune statements in DB2 for Linux, UNIX, and Windows, and DB2 for z/OS that reference session tables        
IBM InfoSphere Optim Query Workload Tuner (OQWT) 3.1.1 can tune statements for IBM DB2 for Linux, UNIX, and Windows, and IBM DB2 for z/OS. This document describes how to use OQWT to tune a statement that accesses one or more session tables. Two methods are presented on how to set up the database environment for the session table such that OQWT 3.1.1 can tune statements using the table. Examples are provided for a script that is required to set up the environment, including example snapshots of the output and functionality of the applicable OQWT tuning features.
          DB2 10.1 Fundamentals certification exam 610 prep: Part 5: Working with tables, views, and indexes        
This tutorial discusses IBM DB2 10.1 support for data types, tables, views, triggers, constraints and indexes. It explains the features of these objects, how to create and manipulate them using Structured Query Language (SQL), and how they can be used in an application. This tutorial is the fifth in a series that you can use to help prepare for the DB2 10.1 Fundamentals certification exam 610.
          Resource description framework application development in DB2 10 for Linux, UNIX, and Windows, Part 2: Optimize your RDF data stores in DB2 and provide fine-grained access control        
The Resource Description Framework (RDF) is a family of W3 specification standards that enables the exchange of data and metadata. Using IBM DB2 10 for Linux, UNIX, and Windows Enterprise Server Edition, applications can store and query RDF data. This tutorial looks at the characteristics of RDF data and describes the process for creating optimized stores. In addition, it describes how to provide fine-grained access control to RDF stores using either the DB2 engine or the application. It includes a sample application.
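As a refresher on the data model involved, an RDF graph is just a set of (subject, predicate, object) triples, and a basic query is pattern matching over them. The sketch below also shows the flavor of application-side fine-grained access control mentioned above: filtering triples before they reach the caller. The `ex:`-prefixed names and the graph contents are invented:

```python
def match(triples, s=None, p=None, o=None):
    """Return triples matching an (s, p, o) pattern; None is a wildcard.
    A minimal stand-in for a SPARQL basic graph pattern."""
    return [t for t in triples
            if (s is None or t[0] == s)
            and (p is None or t[1] == p)
            and (o is None or t[2] == o)]

graph = [
    ("ex:alice", "ex:worksFor", "ex:ibm"),
    ("ex:alice", "ex:email", "alice@example.com"),
    ("ex:bob",   "ex:worksFor", "ex:ibm"),
]

# Application-enforced access control: strip email triples for this caller
visible = [t for t in graph if t[1] != "ex:email"]
print(match(visible, p="ex:worksFor"))  # both worksFor triples survive
```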
          IBM InfoSphere Optim Query Capture and Replay 1.1 for Linux, UNIX, and Windows, Part 1: Introduction to OQCR        
IBM InfoSphere Optim Query Capture and Replay (IOQCR) 1.1 for Linux, UNIX, and Windows enables an organization to create a production-like application test environment where changes can be tested and tuned before being deployed into production. InfoSphere Optim Query Capture and Replay captures all of the application workload running against a production database and replays it against a test database without the need to replicate the entire application infrastructure. It not only replays both dynamic and static SQL, but also reproduces the number of client connections and their properties, the timing and order of execution, transaction boundaries and isolation levels, and many other critical features of workload execution. The result is a much closer approximation of production workloads and greater confidence that when changes are deployed, they will not disrupt production.
          System Administration Certification exam 919 for Informix 11.70 prep, Part 2: Informix space management        
In this tutorial, you'll learn how to configure and manage storage spaces in an IBM Informix(R) database, the utilities used to create those storage spaces, and how to use fragmentation and other features to optimize storage in the database. This tutorial prepares you for Part 2 of the System Administration Certification exam 919 for Informix v11.70.
          Exploring IMS disaster recovery solutions, Part 4: Coordinated IMS and DB2 solutions        
Every customer needs a disaster recovery (DR) plan, and the strategy will differ from one customer to the next. For IMS, there are two types of DR solutions: IMS-specific solutions and storage mirroring. In this tutorial, we explore the IMS-specific DR solutions. There are solutions that use only the IMS base product and solutions that use the IBM IMS Tools products. For each DR solution, we discuss the key concepts related to that solution.
          System Administration Certification exam 919 for Informix 11.70 prep, Part 5: Informix backup and restore        
In this tutorial, you'll learn about IBM Informix(R) database backup and restore concepts and strategies, and you'll learn about utilities and commands for managing your database backup and restore processes. In addition, learn how to monitor your backups and perform problem determination when necessary. This tutorial prepares you for Part 5 of the System Administration Certification exam 919 for Informix v11.70.
          System Administration Certification exam 919 for Informix 11.70 prep, Part 4: Performance tuning        
Tune IBM Informix(R) database server and its different subsystems for optimum performance. After an overview, follow along with examples on how to look at the database server and its subsystems. Learn about important database optimization elements, including checkpoints, recovery, physical logging, logical logging, asynchronous I/O VP, network parameters, disk resources, CPU VP resources, PDQ, memory grant manager, scan threads, index creation, statistics maintenance, and self tuning. Use this tutorial, the fourth in a series of eight tutorials, to help prepare for Part 4 of the Informix 11.70 exam 919.
          Designing an integration landscape with IBM InfoSphere Foundation Tools and Information Server, Part 1: Planning an integration landscape        
This tutorial is an introduction to the use of IBM InfoSphere Blueprint Director, in the context of a project, to depict the target vision (or landscape) for the final solution and to provide guidance for subsequent project tasks. It is the first of a series of tutorials focused on a specific, common information integration scenario: the update of a Data Warehouse-Business Intelligence (DW-BI) information process.
          Managing and scheduling database jobs with the Data Studio Web Console        
As the number of databases increases in many organizations, many DBAs face a major challenge in automating and scheduling their database operations. The new job management capability in IBM Data Studio provides DBAs with a simple and flexible way to create and manage database jobs and to schedule command scripts to run automatically.
          System Administration Certification exam 919 for Informix 11.70 prep, Part 3: System activity monitoring        
In this tutorial, you'll learn about IBM Informix(R) database tools, the utilities to monitor the database, and how to diagnose problems. Learn how to use the system-monitoring interface (SMI) and the SQL administration API. This tutorial prepares you for Part 3 of the System Administration Certification exam 919 for Informix v11.70.
          Implement custom query transactions for IBM InfoSphere Master Data Management Server        
Learn how to extend IBM InfoSphere Master Data Management Server by implementing new query transactions using the MDM Server Workbench.
          Integrating SPSS Model Scoring in InfoSphere Streams, Part 1: Calling Solution Publisher from an InfoSphere Streams operator        
This tutorial describes how to write and use an InfoSphere Streams operator to execute an IBM SPSS Modeler predictive model in an InfoSphere Streams application using the IBM SPSS Modeler Solution Publisher Runtime Library API.
          Integrating SPSS Model Scoring in InfoSphere Streams, Part 2: Using a generic operator        
Part 1 of this series describes how to write and use an InfoSphere Streams operator to execute an IBM SPSS Modeler predictive model in an InfoSphere Streams application using the IBM SPSS Modeler Solution Publisher Runtime library API. Part 2 takes the non-generic operator produced in Part 1 and extends it to be a generic operator capable of being used with any SPSS Modeler stream without any custom C++ coding needed.
          Solving problems in the DB2 pureScale cluster services environment        
This tutorial guides DBAs and system administrators in problem determination for DB2 pureScale cluster services. As you deploy IBM DB2 pureScale Feature for DB2 Enterprise Server Edition systems into production, you need to acquire appropriate problem determination skills. This tutorial provides information about gathering diagnostic information when failures occur, and provides additional information to aid in understanding the tightly integrated subcomponents of the DB2 pureScale Feature, such as the Cluster Caching Facility (CF), General Parallel File System (GPFS), Reliable Scalable Cluster Technology (RSCT), and IBM Tivoli Systems Automation for Multiplatforms (Tivoli SA MP).
          Integrate the rich Internet application framework ZK with Informix to build real-world applications        
This tutorial presents a real-world example that integrates IBM Informix and ZK, a rich Internet application (RIA) framework. Informix is a flagship IBM RDBMS product, while ZK is a Java-based web application framework supporting Ajax applications. This event-driven framework enables creation of rich user interfaces with minimal knowledge and use of JavaScript. ZK's unique server-centric approach enables synchronization of components and events across the client and server via the core engine.
          Migrating InfoSphere Streams SPADE applications to Streams Processing Language, Part 3: Migrate SPADE user-defined function applications        
The most significant new feature of Version 2.0 of the IBM InfoSphere(R) Streams product is the programming language model transformation from Streams Processing Application Declarative Engine (SPADE) to Streams Processing Language (SPL). Users with SPADE applications from previous versions will need to migrate and port their applications to SPL when upgrading their installations to Version 2.0. This tutorial is Part 3 of a 5-part series that uses actual SPADE samples to demonstrate a series of step-by-step procedures for migrating and porting different types of SPADE application content. Part 3 demonstrates the migration of SPADE user-defined function applications.
          Recommended practices for using Cognos with Informix, Part 2: Deploy Informix with IBM Cognos BI Server 10        
Connecting your Informix databases to IBM Cognos Business Intelligence software gives you a way to unleash the power of your data with expanded query, reporting, and analysis capabilities. If you're ready to take that step, this two-part tutorial series gives you the information you need to install, configure, and deploy the necessary components to achieve the best results. Part 1 showed how to get started with using IBM Cognos Express V9 together with IBM Informix V11.5 as a content store and data source. In Part 2, you'll get the same level of detail for deploying Informix with IBM Cognos BI Server V10. The tutorials include recommended practices for each step along the way, based on lessons learned from real-world deployments on the Windows operating system.
          Migrating InfoSphere Streams SPADE applications to Streams Processing Language, Part 5: Migrate SPADE user-defined built-in operator (UBOP) applications        
The most significant new feature of Version 2.0 of the IBM InfoSphere(R) Streams product is the programming language model transformation from Streams Processing Application Declarative Engine (SPADE) to Streams Processing Language (SPL). Users with SPADE applications from previous versions will need to migrate and port their applications to SPL when upgrading their installations to Version 2.0. This tutorial is Part 5 of a 5-part series that uses actual SPADE samples to demonstrate a series of step-by-step procedures for migrating and porting different types of SPADE application content. Part 5 demonstrates the migration of SPADE user-defined built-in operator (UBOP) applications.
          Migrating InfoSphere Streams SPADE applications to Streams Processing Language, Part 4: Migrate SPADE user-defined operator (UDOP) applications        
The most significant new feature of Version 2.0 of the IBM InfoSphere(R) Streams product is the programming language model transformation from Streams Processing Application Declarative Engine (SPADE) to Streams Processing Language (SPL). Users with SPADE applications from previous versions will need to migrate and port their applications to SPL when upgrading their installations to Version 2.0. This tutorial is Part 4 of a 5-part series that uses actual SPADE samples to demonstrate a series of step-by-step procedures for migrating and porting different types of SPADE application content. Part 4 demonstrates the migration of SPADE user-defined operator (UDOP) applications.
          Migrating InfoSphere Streams SPADE applications to Streams Processing Language, Part 2: Migrate SPADE mixed-mode applications        
The most significant new feature of Version 2.0 of the IBM InfoSphere(R) Streams product is the programming language model transformation from Streams Processing Application Declarative Engine (SPADE) to Streams Processing Language (SPL). Users with SPADE applications from previous versions will need to migrate and port their applications to SPL when upgrading their installations to Version 2.0. This tutorial is Part 2 of a 5-part series that uses actual SPADE samples to demonstrate a series of step-by-step procedures for migrating and porting different types of SPADE application content. Part 2 demonstrates the migration of SPADE mixed-mode applications.
          Migrating InfoSphere Streams SPADE applications to Streams Processing Language, Part 1: Migrate basic SPADE applications        
The most significant new feature of Version 2.0 of the IBM InfoSphere(R) Streams product is the programming language model transformation from Streams Processing Application Declarative Engine (SPADE) to Streams Processing Language (SPL). Users with SPADE applications from previous versions will need to migrate and port their applications to SPL when upgrading their installations to Version 2.0. This tutorial is Part 1 of a 5-part series that uses actual SPADE samples to demonstrate a series of step-by-step procedures for migrating and porting different types of SPADE application content. Part 1 demonstrates the migration of basic SPADE applications.
          The Informix Detective Game        
Here's a fun way to learn about IBM Informix! Learn or teach the basics of Informix and relational databases with an interactive game called the Informix Detective Game (the game's theme is a crime investigation). The game teaches relational database concepts and shows how technology can be applied to solving real-life problems. The Informix Detective Game is based on the DB2 Detective Game created by Joanna Kubasta and Joanne Moore.
          Take a beginner's tour of the Informix virtual table interface with shared libraries        
IBM Informix (R) provides access to external data sources through the virtual table interface (VTI). The VTI provides a set of hooks called purpose functions. As a developer, your task is to create an access method that implements VTI purpose functions and as many additional user-defined routines (UDRs) as necessary to access your external data source. This tutorial shows you how to compile and run your VTI UDR as a shared library.
          Migrate a database from MySQL to IBM Informix Innovator-C Edition, Part 2: Step-by-step walk-through of the migration process        
Walk through a migration from MySQL to Informix, step by step. The tutorial provides a conversion methodology and discusses the processes for migrating both database objects and data. It includes a discussion of SQL differences and shows how to migrate tables, views, stored procedures, functions, triggers, and more.
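The object-by-object conversion is product-specific, but the data-movement step of such a migration follows the same extract-and-load shape on any engine. Below is a minimal sketch of that loop using Python's DB-API, with sqlite3 standing in for both the MySQL source and the Informix target; in a real migration the driver, the parameter style, and the DDL would all differ (for example, MySQL AUTO_INCREMENT columns typically become Informix SERIAL columns).

```python
import sqlite3  # stand-in for the actual MySQL and Informix DB-API drivers


def copy_table(src, dst, table, columns):
    """Generic DB-API extract-and-load loop used when migrating a table."""
    col_list = ", ".join(columns)
    placeholders = ", ".join("?" for _ in columns)  # paramstyle varies by driver
    rows = src.execute(f"SELECT {col_list} FROM {table}").fetchall()
    dst.executemany(
        f"INSERT INTO {table} ({col_list}) VALUES ({placeholders})", rows
    )
    dst.commit()
    return len(rows)


# Demo with two in-memory databases standing in for source and target.
src = sqlite3.connect(":memory:")
dst = sqlite3.connect(":memory:")
src.execute("CREATE TABLE customer (id INTEGER, name TEXT)")
src.executemany("INSERT INTO customer VALUES (?, ?)", [(1, "Ana"), (2, "Ben")])
# On the Informix side the DDL would differ (e.g. SERIAL instead of AUTO_INCREMENT).
dst.execute("CREATE TABLE customer (id INTEGER, name TEXT)")
n = copy_table(src, dst, "customer", ["id", "name"])
print(n)  # number of rows migrated
```

The same loop works for any table once the target DDL exists, which is why migration guides treat schema conversion and data movement as separate steps.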
          Managing pureQuery-enabled applications efficiently, Part 1: Set up an SQL management repository using an Ant script        
IBM Optim Development Studio and the pureQuery Runtime include a command-line utility called ManageRepository that can be used to create, modify, export, import, and delete pureQuery metadata that is stored in the SQL management repository. Setting up an SQL management repository can be challenging using the ManageRepository utility command script. This tutorial shows you how to create and manage an SQL repository using an Ant script. You will also learn how to run the Ant script from within IBM Optim Development Studio.
          Develop mapping models with IBM InfoSphere Data Architect        
Designing the mappings for an extract, transform, and load (ETL) process is a critical step in a data warehouse project. Mappings must be easy to modify, capable of version control, easily reported, and easily exported to other formats. This tutorial illustrates how to develop a complete source-to-target mapping model using InfoSphere(TM) Data Architect. You will also learn about the reporting functions that InfoSphere Data Architect provides.
          Use the IBM Industry Model Insurance Information Warehouse to define smart and mature data models        
In this tutorial, learn a method for developing data models for data warehouse projects using the IBM Industry Model Insurance Information Warehouse (IIW), which is part of the IBM Industry Models product defined for the insurance domain. The tutorial shows the recommended approach for developing core data warehouse (CDW) models and data mart (DM) models. It also introduces the recommended data warehousing development method (DWDM) for working with the IIW model pattern framework to architect data warehouse solutions for insurance companies.
          Use CSV and XML import methods to populate, update, and enhance your InfoSphere Business Glossary content        
IBM InfoSphere Business Glossary enables you to create, manage, and share an enterprise vocabulary and classification system. In Version 8.1.1, the InfoSphere Business Glossary introduced some new CSV and XML import and export methods to populate a business glossary with data. This tutorial provides technical instructions, tips, and examples to help you implement these new features to efficiently create a business glossary.
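As a rough illustration of the CSV side of such an import, the snippet below builds a term-list payload with Python's csv module. The column headers here are placeholders of my own choosing; the real header set is dictated by the InfoSphere Business Glossary import format documented with the product.

```python
import csv
import io

# Hypothetical glossary rows; treat these column names as placeholders,
# not the product's actual CSV schema.
terms = [
    {"Category": "Finance", "Term": "Net Revenue",
     "Description": "Revenue after returns and discounts."},
    {"Category": "Finance", "Term": "Gross Margin",
     "Description": "Net revenue minus cost of goods sold."},
]

buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=["Category", "Term", "Description"])
writer.writeheader()   # first row: the column headers
writer.writerows(terms)
csv_payload = buf.getvalue()
print(csv_payload)
```

Generating the file programmatically like this makes bulk population repeatable, which is the main point of the import/export methods the tutorial covers.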
          High-performance solution to feeding a data warehouse with real-time data, Part 2: Explore the integration options with staging tables and WebSphere MQ messages        
Feeding a data warehouse with changes from the source database can be very expensive. If the extraction is only done with SQL, there is no way to easily identify the rows that have been changed. IBM InfoSphere(TM) Replication Server can detect changed data by reading only the database log. This series shows how to use InfoSphere Replication Server to efficiently extract only the changed data and how to pass the changes to IBM InfoSphere DataStage(R) to feed the data warehouse. Part 1 of the 2-part series provided an overview of these products and how they can work together. In this Part 2, explore two integration options: using WebSphere(R) MQ messages with InfoSphere Event Publisher and using staging tables.
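The staging-table option can be sketched in miniature: the replication layer appends change records (an operation code plus the row image) to a staging table, and the load job periodically applies everything past its last-seen sequence number. This toy uses sqlite3 and invented column names, not the actual change-data table layout that InfoSphere Replication Server produces.

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE staging (seq INTEGER, op TEXT, id INTEGER, amount REAL)")
db.execute("CREATE TABLE warehouse (id INTEGER PRIMARY KEY, amount REAL)")
db.executemany("INSERT INTO staging VALUES (?, ?, ?, ?)", [
    (1, "I", 100, 9.50),   # insert
    (2, "I", 101, 4.25),   # insert
    (3, "U", 100, 12.00),  # update
    (4, "D", 101, None),   # delete
])


def apply_changes(db, last_seq):
    """Apply staged changes newer than last_seq; return the new high-water mark."""
    rows = db.execute(
        "SELECT seq, op, id, amount FROM staging WHERE seq > ? ORDER BY seq",
        (last_seq,),
    ).fetchall()
    for seq, op, id_, amount in rows:
        if op == "I":
            db.execute("INSERT INTO warehouse VALUES (?, ?)", (id_, amount))
        elif op == "U":
            db.execute("UPDATE warehouse SET amount = ? WHERE id = ?", (amount, id_))
        elif op == "D":
            db.execute("DELETE FROM warehouse WHERE id = ?", (id_,))
        last_seq = seq
    db.commit()
    return last_seq


high_water = apply_changes(db, 0)
print(high_water)  # 4
```

Persisting the high-water mark between runs is what lets the ETL job pick up only new changes, which is the efficiency argument the series makes for log-based capture over full SQL extraction.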
          Integrate enterprise metadata with IBM InfoSphere and Cognos        
Knowledge about the quality and correctness of the huge volumes of data that drive day-to-day activities for enterprises and organizations is essential for effective decision making. Use this tutorial to learn how to gain visibility into your metadata, which in turn will lead to increased trust in data reliability, increased agility, and improved common understanding throughout your enterprise. This tutorial describes the significance of business and technical metadata integration and shows how heterogeneous metadata in an enterprise can be integrated using various IBM products. After a brief overview of the business issues and the integration solution, the tutorial provides a step-by-step guide showing you how to integrate metadata using tools from the IBM InfoSphere and Cognos product suites.
          Recommended practices for using Cognos with Informix, Part 1: Deploy Informix with IBM Cognos Express 9        
Connecting your Informix databases to IBM Cognos Business Intelligence software gives you a way to unleash the power of your data with expanded query, reporting, and analysis capabilities. If you're ready to take that step, this two-part tutorial series gives you the information you need to install, configure, and deploy the necessary components to achieve the best results. Part 1 gets you started with using IBM Cognos Express V9 together with IBM Informix V11.5 as a content store and data source. In Part 2, you'll get the same level of detail for deploying Informix with IBM Cognos BI Server V10. The tutorials include recommended practices for each step along the way, based on lessons learned from real-world deployments on the Windows operating system.
          Automate DB2 9.7 database maintenance in an embedded database environment        
Within an embedded database environment, it is important that you, as a database administrator, automate as many maintenance tasks as possible so that you can run the database with minimal intervention. IBM DB2 for Linux, UNIX, and Windows provides advanced automation features for configuring, tuning, and managing databases. These automation features allow you to spend less time managing routine tasks and more time focusing on strategic issues that help your business gain and maintain a competitive advantage. This tutorial shows you how to automate routine maintenance tasks for DB2 on Linux or UNIX.
          Fix password issue on IBM Thinkpad        

New Discussion Post by TheRealRaven

The post Fix password issue on IBM Thinkpad appeared first on IT Answers.

          New Florida Law Lets Residents Challenge School Textbooks        
Keith Flaugh is a retired IBM executive living in Naples, Fla., and a man with a mission. He describes it as "getting the school boards to recognize ... the garbage that's in our textbooks." Flaugh helped found Florida Citizens' Alliance , a conservative group that fought unsuccessfully to stop Florida from signing on to Common Core educational standards. More recently, the group has turned its attention to the books being used in Florida's schools. A new state law , developed and pushed through by Flaugh's group, allows parents, and any residents, to challenge the use of textbooks and instructional materials they find objectionable via an independent hearing. Flaugh finds many objections with the books used by Florida students. Two years ago, members of the alliance did what he calls a "deep dive" into 60 textbooks . "We found them to be full of political indoctrination, religious indoctrination, revisionist history and distorting our founding values and principles, even a significant
          Your guide to top tech conferences 2017        

IBM InterConnect, Code/Media, SXSW, Google Next, Microsoft WPC, Dreamforce '17, Adobe Summit...conferences you may have only dreamed of attending! So many technology conferences, so little time (or money).

Maybe you're looking to stay on top of emerging trends in your industry, or get a read on what your customers are excited about. Perhaps you're actively networking for customers, vendors or even a new job. Or you might simply be looking to mingle with your peers in more informal surroundings.

Regardless of your intent, there's never a bad reason to maintain a solid network of professionals and experts in your vertical or other fields.

We also know it's hard to keep track of all the great technology conferences you might want to attend in any given month, quarter or year.


          Western Digital buys up IBM patents for data storage devices        
IBM has announced that Western Digital acquired more than 100 of its patents and has also entered into a cross-licensing agreement with the company....
          IBM: Sketching The Future of Business Intelligence and Social Interaction        
We asked Robert Ashe of IBM to sketch how he sees business intelligence and collaboration services fitting into IBM's future. What he shows is ...




Captain Garmo speaks out on community outreach, police brutality and immigration


By Kendra Sitton

August 7, 2017 (Rancho San Diego) -- Communities across San Diego County celebrated the National Night Out Against Crime on Tuesday, Aug. 1st. The Rancho San Diego Command hosted an event in the Target parking lot, where community members and police officers grilled hot dogs and visited booths. The National Night Out Against Crime is part of the wider public outreach strategy utilized by the RSD Command.

“We’re really involved in community policing. We take a lot of pride in doing a lot of outreach through faith-based and through community groups,” said Captain Marco Garmo. “Community outreach is so, so important. It’s just as important as putting bad guys in jail. They hold equal ground. We have to put resources towards it.”

Some of the organizations represented at the event included Crime Prevention Services, Senior Volunteers, County of San Diego Parks and Recreation, Federal Bureau of Investigation, Otay Water District, Lighthouse Baptist Church, Bright Hope Community Church, The Casa de Oro Community Alliance and the Fire Department.

The event has been a national tradition since 1984. This year’s event was held on a day filled with mugginess, showers, and flash flood warnings, which may have dampened people’s desire to attend. The small crowd allowed the people running the booths to visit each other and socialize with the deputies in attendance.

“As far as this event goes, what’s great about this is people get to come up and talk to us and realize we’re just like everybody else. We’re human beings and we have the same likes and dislikes. We look like everyone else. We’re approachable. We’re normal, average people. We’re not running around looking for someone to arrest or write a ticket,” Garmo said.

He stressed the importance of community members having a relationship with police outside of an enforcement posture. He believes only seeing law enforcement when they are taking people to jail creates the distrust of police that can put residents and officers in harmful situations.  Through this event, he said, “They realize there’s more to us than just putting people in jail.”

Events such as the Night Out Against Crime do more than simply inform residents of the services available to them and the people who provide those services. They also connect residents to new opportunities to serve their communities.

Lisa Powell attended another event hosted by the department, “Coffee with the Community,” where she first learned about the Casa de Oro Alliance. Through that event, she is now an avid member of the alliance which is working to revitalize Casa de Oro.

John Kline began volunteering with the department three years ago to stay busy after retiring from IBM, and he quickly worked his way up to administrator, handling scheduling and leading the volunteers. He greeted people at the Senior Volunteer booth while touting the important role the volunteers play during the six hours, and often more, that they patrol. “It’s a good bunch of people we have,” said Kline. According to Kline, the Rancho San Diego Command covers the biggest geographic area in the County, which makes the volunteers’ assistance to the staff invaluable.

In the geographic area covered by RSD Command, there are also a variety of groups officers must be prepared to interact with. “We are the most diverse Sheriff’s jurisdiction within the entire department. The area we’re in now is one of the largest Iraqi populations within the United States. The Lemon Grove station falls under the Rancho San Diego Command, so Lemon Grove has the Somali community and actually has an Islamic mosque. Spring Valley is 12% African American where the county average is six percent, so we have twice the county average of the African American community,” said Garmo.

As Captain, Garmo has made sure that his officers are culturally aware and remain sensitive to the needs of the people they serve. “We’re very constitutional in the way we police in this region…it is very important that my deputies and myself put out a very positive image and to change that narrative and to give people a more positive perception of law enforcement,” he said.

The event came a few days after President Donald Trump joked in Long Island that police should not worry about injuring suspects when they are arrested, which drew widespread criticism from advocates against police brutality.

“Donald Trump is not a cop and he is not from the law enforcement community. He’s a wealthy businessman. Although I do respect him as our president, and the support he gives the law enforcement community, I don’t really think he’s in the position to be making those types of statements,” Garmo said. The department already has a “force plus one” policy in place, meaning that if someone swings at an officer, the officer can use a baton, taser, or pepper spray. Garmo said, “I believe force should be used in accordance with the force being used. He’s (Donald Trump) not going to change the way we do business.”

San Diego County Sheriff Bill Gore has already stated that police will not be acting as immigration officers despite the change in Department of Homeland Security rules under President Trump.

“Our immigration policies are not changing. We do not ask people their immigration status. That is not our business. We are here to serve people in San Diego whether they are here legally or illegally,” Garmo said, “The last thing I would want, because I come from an ethnic community myself-- I’m a first generation Chaldean Arab-American, and the last thing I would want in my community is people fearful to report a crime and allow themselves to be victims for fear of being deported. That’s not our place.”

Garmo measures the success of community engagement by each year that passes without a situation like the one in Ferguson, Missouri, where riots followed the shooting of an unarmed black youth by a white police officer.

Last year, RSD Command witnessed neighboring El Cajon, which is in the Sheriff’s jurisdiction, face a wave of protests after the officer-involved shooting of Alfred Olango.

Community outreach is “very important because if the community doesn’t trust you, when you have a questionable shooting, they’re not going to give you the benefit of the doubt. They’re going to wait for something like that so they can attack you regardless of whether the cop did right or wrong. It becomes about `All we ever see you guys do is arrest us, so now we have something to latch onto and protest and talk about.’ It really doesn’t ever become about that incident. It’s a bigger problem. And if law enforcement agencies don’t see that, they’re way behind the times,” Garmo concluded.

          IBM Domino 9 PDF Export Tool 1.0        
IBM Domino 9 PDF Export Tool exports Lotus Domino DXL databases to PDF.
          Sirius Computer Solutions Wins 2015 IBM Choice Award for Top Business Partner in Growth and Business Transformation – National        

Awarded to US National IBM Business Partner for highest achievement in growth and transformation

(PRWeb February 12, 2015)

Read the full story at

          Sirius Computer Solutions Wins 2015 IBM Beacon Award for Outstanding Technical Support Services        

IBM Beacon Awards recognize select Business Partners for delivering advanced cloud, analytics, mobile, security and social solutions.

(PRWeb February 12, 2015)


          Sirius Partners with IBM to Deliver Software-Based Solutions to Federal Clients        

Initial focus includes U.S. Air Force and Department of Energy

(PRWeb September 15, 2014)


          Sirius Is First Worldwide Solutions Integrator Approved to Offer Services for IBM PureFlex Systems and Flex System Technologies        

Sirius Computer Solutions, Inc., a leading national IT solutions integrator and IBM Premier Business Partner, has invested in IBM PureFlex Systems and IBM Flex System technologies to become the first partner qualified to perform configuration and installation services.

(PRWeb March 19, 2013)


          Sirius Solutions Help Retail Clients See Increased Online Sales and Efficiency; Sirius and IBM Technologies Increase Overall Business Performance        

As more retail clients face the pressure of staying ahead of the latest consumer trends to identify new market opportunities while increasing business efficiency, many are turning to Sirius and IBM to address these key business challenges. As part of today's news, Sirius Computer Solutions, Inc., a national IT solutions integrator, announced that three of its retail clients are seeing significant performance improvements by delivering increased online sales and efficiency.

(PRWeb March 22, 2012)


          Sirius Helps Clients Address Tough Data Storage Challenges -- IBM Storage Skills and Certifications Key to Achieving Client Success        

Sirius Computer Solutions, Inc., a leading national IT solutions provider, is strengthening its relationship with IBM and its storage business to better serve the thousands of data center clients who have come to rely on Sirius for expertise and support.

(PRWeb March 08, 2012)


          Sirius Computer Solutions Achieves IBM System Storage Specialty        

Sirius is the first IBM partner with the IBM Storage Specialty: Making investments in IBM Storage skills to help clients realize the benefits of optimized, virtualized IBM storage solutions.

(PRWeb October 02, 2011)


          Sirius Achieves IBM Industry Solutions Specialties for Retail and Insurance, as well as Cross-Capability Specialty for Security        

Certification Recognizes Industry-Specific Solutions and Cross-Industry Capabilities Combining IBM Software, Sirius Intellectual Property and Proven Best Practices

(PRWeb June 01, 2011)


          Sirius Computer Solutions Wins IBM Beacon Award in the Global Category: Overall Technical Vitality        

IBM Beacon Award Recognizes IBM Business Partners for Outstanding Achievements in Business and Technology Excellence

(PRWeb February 21, 2011)


          Sirius Computer Solutions is the First IBM Partner Approved for the IBM System x Specialty         

Investments in skills and high level of market performance recognized

(PRWeb August 30, 2010)


          RE[2]: I don't trust Canonical        
When it comes to operating systems and software? I trust companies who back their claims with actual software and don't live in a mythical world. I trusted SUN, I trust IBM and Google, I half trust Oracle and Microsoft.
          Sirius Computer Solutions Receives Two Awards for North American Business Partner Excellence and Business Leadership         

Sirius Computer Solutions, a national technology solutions provider and IBM Premier Business Partner, was recognized with the IBM Business Leadership Award for the thirteenth consecutive year in addition to the IBM North America Business Partner Excellence Award for 2009.

(PRWeb June 02, 2010)


          Sirius Computer Solutions Wins IBM Beacon Award in the Global Category: Overall Technical Vitality         

IBM Beacon Award Recognizes IBM Business Partners for Outstanding Achievements in Business and Technology Excellence

(PRWeb May 20, 2010)


          Gary Kovacs named new CEO of Mozilla Corp.        
By Percy Cabello. It took Mozilla six months to find the ideal successor [en] to John Lilly, who had been CEO of Mozilla Corporation for the last two and a half years and COO before that. Gary Kovacs, a former general manager and vice president of mobile devices at Adobe, and an IBM employee for 10 years, has been […]
          IBM brings Watson's cognitive computing to the sporting arena        
Big Blue, together with a trio of new partners and Watson, will enhance the capabilities of applications that help prevent sports injuries, change the nature of training in sports such as golf, and transform the experience of fans who come to the venue to watch competitions.
          Dodging the death knell of obsolescence (cough, cough, newspapers)        
My new sport is awaiting the first serious reporting by a NYT or IHT journalist about the fate and fortunes of their own employer, the NYT Company.

Today we're still at the polite 'cough, cough' stage, but watch this space.

Dodging the death knell of obsolescence
By Catherine Rampell
Sunday, November 16, 2008
By some logic, there is no earthly reason why bicycles should still exist.
They are a quaint, 19th-century invention, originally designed to get someone from point A to point B. Today there are much faster, far less labor-intensive modes of transportation. And yet hopeful children still beg for them for Christmas, healthful adults still ride them to work, and daring teenagers still vault them down courthouse steps. The bicycle industry has faced its share of disruptive technologies, and it has repeatedly risen from the ashes.
Other industries (cough, cough, newspapers) should be so lucky.
For some businesses, the current economic downturn is a bit problematic. For those already facing fundamental threats — like newspapers and American automakers — it could accelerate the path to what, it has been said, might be death.
But history offers some reason for optimism. Industries like bicycle manufacturers, when faced with a threat of obsolescence, managed to creatively reinvent themselves. What lessons do they provide for struggling industries?
There's no clear route to cheating industrial death. Those companies that have survived technological challenges have in common some combination of perseverance, creativity, versatility and luck. Their precise strategies vary. Some made sweeping changes, and abandoned their original products entirely; others were able to endure by changing little but their marketing.
Take, for example, a certain class of luxury goods. Inventors have created more user-friendly writing implements than fountain pens, more dependable time-keeping devices than mechanical wristwatches, and more efficient ways to heat houses than fireplaces. Yet, many consumers still gladly opt for the cultural cachet of technologically more primitive goods.
These older technologies have survived by recasting themselves as luxuries and by marketing their sensory, aesthetic and nostalgic appeal. Their producers emphasize their experiential rather than functional qualities.
In short, they were Ye-Olde-ed, and a boutique-y rump of the original industry now survives.
The popularity of newspapers the day after Barack Obama's election — when they were probably valued more as historical artifacts than as sources of news — had a whiff of this development.
But newspapers were not designed with maximum tactile pleasure and durability in mind. "Newspapers were always this scrubby sheet of paper with ink that came off, and that deteriorated in a few hours," said Gregory Clark, an economic historian at the University of California at Davis.
For that reason, he said, it is somewhat difficult to imagine newspapers remarketing themselves as a luxury product.
Perhaps there are other qualities unique to newspapers that can be exploited, just as previous creative industries have discovered when facing disruptive technologies.
Photography might have killed Western painting and portraiture, for example, because painters knew they couldn't compete with the speed and accuracy with which photographs represented the visual world. Instead, many painters and other traditional visual artists innovated with more abstract and less representational images.
Similarly, television might have crowded out movies. Instead, Hollywood focused on bigger, more spectacular, more risqué films — the stuff that television couldn't deliver.
Some survivor industries discovered new customer bases.
Bicycles, for example, grew in popularity in the United States through the late 19th century, peaking in the 1890s, but the craze weakened around the turn of the last century. After the First World War, manufacturers discovered a new youth market, which lasted until the baby boomers were kids. Then bikes fell out of favor again, but were revived during the 1970s when those boomers, and their kids, became more interested in personal exercise and gas-free, environmentally friendly modes of transportation.
Radio is an even better example. In its 1940s heyday, it was the center of U.S. national entertainment. Then, in the 1950s, television began stealing radio's biggest stars, like Jack Benny and Abbott and Costello. National advertisers — radio's revenue base — followed the talent. "Radio, actually shockingly, was pronounced dead in 1953," says Susan Douglas, chair of the communication studies department at the University of Michigan.
But the industry revitalized itself by tapping into new markets. First it stumbled upon the youth music market, congregating around the car radio. Then radio innovators found other neglected markets, including underground music movements, longer-form news and talk radio. Along the way, radio's business model changed; the medium cultivated new niche advertisers, rather than national advertisers, to pay for its new niche programming.
For some companies, nestling into a marketing nook wasn't enough. They made radical transitions to new products and new industries, and survived through evolution, not preservation.
"Much of the history of the 'American system of manufacturing' is the story of inventors moving from a declining industry to a new expanding industry," says Petra Moser, an economic historian at Stanford who studies innovation. "Inventors take their skills with them."
Gun makers learned to make revolvers with interchangeable parts in the mid-19th century, Moser says. Then those companies (and some former employees, striking out on their own) applied those techniques to sewing machines when demand for guns slackened. Later, sewing machine manufacturers began making woodworking machinery, bicycles, cars and finally trucks.
Some famous companies have taken more improbable turns, either because their original business was fading or because they saw better growth opportunities. Before making cellphones, Nokia made paper. Before making cars, Toyota made looms (a Toyota textile business still exists). Corning is still a specialty glass and ceramics company known to most consumers for its tableware, but for more than a century it has also profited from uses as diverse as early light bulbs, space, defense and fiber-optic cable.
Some superstar companies managed to reinvent themselves multiple times — IBM, for example. Over a century, the company has nimbly transitioned from punch-card accounting equipment (its original business) to large mainframe computers, to personal computers, and finally to information-technology services — each time facing skepticism from analysts who thought IBM might be too big, too old or too entrenched to adapt.
These companies survived by keeping their ears to the ground. New customer needs emerged, and smart corporations positioned themselves to meet them. "You have to be willing to walk away from the things that have made you great," says Scott Anthony, president of Innosight, which consults with companies (including newspapers and automotive businesses) on how to foster a culture of innovation. He argues that the incumbents in the newspaper industry were caught sleeping during the initial meteoric growth period of Web sites like Wikipedia because the avenue for innovation — letting crowds rather than experts aggregate and filter data — seemed so antithetical to what newspapers did well.
Of course, straying too far from what a company does well has also proven dangerous. "If you look at the history of firms that have tried to diversify their businesses, you'll see it's virtually an impossible thing to do," says David Hounshell, a historian at Carnegie Mellon University who studies technology and social change. "Usually when a firm announces a program to diversify, they've pretty much written their death warrant."
Newspapers have faced challenges before and have adapted — including through efforts at diversification. Can these historical precedents teach newspapers how to defeat the economic forces of technological change once again?
Like previous industries fearful of obsolescence, newspapers can either develop a new product, or find a way to remarket and remonetize the old one. Right now, newspapers are doing a little of both: They're adapting their product to the Web to attract new audiences, and they're trying to re-monetize by delivering more targeted advertising.
Meanwhile, we've already seen some of the "destruction" half of Joseph Schumpeter's famous "creative destruction" paradigm, with many newspapers cutting staff and other production costs. Unfortunately for newspapers, historians say, the survivors in previous industries facing major technological challenges were usually individual companies that adapted, rather than an entire industry. So a bigger shakeout may yet come.
But perhaps the destruction will lead to more creativity. Perhaps the people we now know as journalists — or, for that matter, autoworkers — will find ways to innovate elsewhere, just as, over a century ago, gun makers laid down their weapons and broke out the needle and thread. That is, after all, the American creative legacy: making innovation seem as easy as, well, riding a bike.

"Books about cosmopolitan urbanites discovering the joys of country life are two a penny, but this one is worth a second glance. Walthew's vivid description of the moral stress induced by his job as a high-flying executive with the International Herald Tribune newspaper is worth the cover price alone…. Highly recommended."
The Oxford Times
Ian Walthew

'I read A Place in My Country with absolute unalloyed delight. A glorious book.'
Jeremy Irons (actor)

‘Ian Walthew was a newspaper executive with a career that took him round the world, who one day did a mad thing. He saw a for-sale sign on a cottage in the Cotswolds, bought it, resigned and moved in. For the first few weeks he just lay on the grass in a daze. Then he started talking to his neighbours and digging into the rich history of this beautiful part of England. Out of his inquiries grew this affecting and inspiring memoir. What sets it apart from others of its ilk is the author’s enviable immunity to cliché and his determination to love his homeland better than he used to.
His elegiac account of relearning how to be an Englishman should be required reading for anyone who claims to know or love this country.’ Financial Times
Ian Walthew

For more reviews visit

Business trip to the IHT in Paris, or friends and family coming to visit you? Fed up with hotels? Bring the family (sleeps 6) to a superb Montmartre apartment - weekend nights free of charge if a minimum of 3 work nights is booked. Cable TV; wifi; free phone calls in France (landlines); large DVD and book library; kids' toys, books, travel cot and beds; two double bedrooms; all mod cons; half an hour to Neuilly and a 12-minute walk from the Eurostar. T&E valid invoices.

10% Discount for NYT employees; 15% Discount for IHT Employees

International Herald Tribune
New York Times
The NYT Company

          Magento Market Share - Growth Continues        
The latest eCommerce survey, in October 2011, found 4% more shop systems than the June survey, for a total of 26,594 shop systems. The analysis covers the top 1 million homepages on Alexa and examines them by detecting features of 32 different eCommerce systems.
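The detection approach described above — scanning a homepage's HTML for platform-specific markers — can be sketched roughly as follows. The marker strings here are illustrative guesses, not the survey's actual signature set:

```python
from typing import Optional

# Sketch of shop-system detection via HTML fingerprint matching.
# The signature strings below are illustrative assumptions, NOT the
# survey's real detection rules.
SIGNATURES = {
    "Magento": ["/skin/frontend/"],   # typical Magento asset path
    "Zen Cart": ["zen-cart"],         # common Zen Cart marker
    "osCommerce": ["oscsid"],         # osCommerce session parameter
}

def detect_platform(html: str) -> Optional[str]:
    """Return the first platform whose markers all appear in the page."""
    page = html.lower()
    for platform, markers in SIGNATURES.items():
        if all(marker in page for marker in markers):
            return platform
    return None
```

A real survey of this kind would need far more robust signatures (script URLs, cookies, meta tags) and a crawler, but the per-page classification step reduces to a lookup like this.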

The two systems Zen Cart and Magento each posted enormous growth of 18% per month. Magento, however, is the real winner, accounting for 20% of all shop systems found. We have seen continuous growth for Magento over the past 12 months, and there is no sign of it slowing down.
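The share and growth figures quoted here follow from simple ratios; a minimal sketch, using the survey's headline numbers (26,594 total shops, Magento at roughly 20%) with placeholder per-month counts:

```python
def market_share(platform_count: int, total_count: int) -> float:
    """A platform's share of all detected shops, as a percentage."""
    return 100.0 * platform_count / total_count

def monthly_growth(counts: list) -> list:
    """Month-over-month growth rates, in percent, for a series of counts."""
    return [100.0 * (new - old) / old for old, new in zip(counts, counts[1:])]

# Headline figure from the October survey: roughly 20% of 26,594 shops.
# The count 5,319 is back-calculated for illustration, not from the article.
share = market_share(5319, 26594)

# Placeholder series illustrating 18% month-over-month growth.
growth = monthly_growth([100, 118])
```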

osCommerce remains the fourth most popular platform, but its downward trend continues, with 200 fewer sites than in the June survey.

A number of other eCommerce platforms have also grown over the past 12 months, including Interspire, OpenCart, PrestaShop and UberCart.

The presence of Magento's Enterprise Edition has risen by more than a third, from 274 to 378 sites. For Magento developers it is naturally very encouraging to see more and more companies moving to this platform.

As an aside, the hosting location of each site was also checked. Of the 26,000 sites, more than 10,000 were hosted in the United States. Germany hosts the second most, with just under 2,000 sites.

Most popular eCommerce software (top 1 million sites) - October 2011
This month I also decided to look at how the distribution changes across the top 100,000 sites. I expected systems such as IBM WebSphere and GSI Commerce to dominate the enterprise market. In fact, Magento holds up very well with 232 shops, only 45 of which were built with the Professional/Enterprise Edition.

Most popular eCommerce software (top 100,000 sites) - October 2011

Here are the full results for all 32 eCommerce systems. Historical figures are included where available.

eCommerce Platform            November … October 2011
Zen Cart                      1556   1533   3167
(name missing)                2683   2701   2753
osCommerce                    3123   3033   2554   2334
PrestaShop                     852   1079   1302   1518
(name missing)                 706    992   1305
Volusion                       889    906   1099
Yahoo Stores                  1315    977
Interspire                     605    739    819    918
(name missing)                 667    831
OpenCart                       335    492    660    757
WP e-Commerce                  779    754    747
X-Cart                         733    740    659    639
Miva Merchant                  710    894    802    464
IBM WebSphere Commerce         223   1011    396
(name missing)                 397    383
OXID eSales                    310    314    305    311
Shopify                        122    143    204    251
(name missing)                 209    249
Actinic                        290    229    237    221
(name missing)                 171    193
(name missing)                 118    140    182
(name missing)                 275    132
(name missing)                  91    112
nopCommerce                     52     64     81    111
GSI Commerce                    48     59     64     63
ekmPowershop                    71     58     65     62
(name missing)                  47     56
(name missing)                  27     34     47
Big Cartel                      28     41     44     47
(name missing)                  12     16
TomatoCart                      11     16      9     15

All detection signatures stayed the same this month. Only one bug was fixed, which had caused a few incorrect results. It produced an anomaly that is particularly noticeable for IBM WebSphere, Miva Merchant and ProStores, all of which lose a few shops in this survey.

Most popular eCommerce sites compared over the last 12 months - October 2011

The following chart is limited to the top 18 eCommerce systems and shows the sites identified over time. It uses the same axis so that the relative values can be compared.

Read the original article at:
1793-1871 - Charles Babbage: advanced the state of computing hardware. 1843 - Lady Ada Augusta Lovelace: the first programmer.
1896 - Dr. Hollerith founded a company to develop a machine; it became known as IBM.
1937 - Atanasoff and Berry developed the first electronic digital computer.
1938 - Konrad Zuse produced the first computer to use binary code.
1941 - Konrad Zuse built the first programmable computer.
1943 - ENIAC, the first fully electronic computer, was built.
1944 - Construction of the first American computer begins.
1947 - First electronic computer, ENIGMA.
1949 - John von Neumann built the EDVAC (Electronic Discrete Variable Automatic Computer).
1956 - IBM develops the first hard disk, called RAMAC.
1957 - An IBM team led by John Backus created the first programming language, FORTRAN, designed for the IBM 704.
1959 - Digital Equipment Corporation develops the PDP-1, the first commercial computer equipped with a keyboard and monitor.
1959-1964 - Transistors; limited compatibility.
1966 - Floppy disks, the mouse, and the use of "windows" appear.
1974 - The first desktop computer, sold with the Intel 8080 microprocessor.
1977 - MODEL I with external peripherals, and MODEL II with built-in video.
1980 - COMMODORE 64.
1981 - IBM PC.
1985 - Microsoft launches a new operating system under the name WINDOWS.
1990 - Tim Berners-Lee devised hypertext to create the World Wide Web (WWW), a new way of interacting with the Internet.
2000 - The INTEL PENTIUM 4 processor is produced, one of the best of that year.
2006 - Launch of Windows Vista.
2009 - Microsoft releases the new version of its operating system, WINDOWS 7.
          Security Content Developer - IBM - Fredericton, NB        
Experience in enterprise application deployment and administration, preferably on security products. Experience in Systems Administration (Windows and Linux)...
From IBM - Wed, 02 Aug 2017 20:55:03 GMT - View all Fredericton, NB jobs
          FashionTherapy: clothes, shoes and magical helpers        
Sometimes it happens: accessories, make-up, clothes and shoes turn into magical helpers that help us bring out resources we keep hidden. I wrote about this theme a little over a year ago, as part of an IBM initiative … Continue reading
          Media, Genealogy, History        
Matthew G. Kirschenbaum

Remediation is an important book. Its co-authors, Jay David Bolter and Richard Grusin, seem self-conscious of this from the outset. The book’s subtitle, for example, suggests their intent to contend for the mantle of Marshall McLuhan, who all but invented media studies with Understanding Media (1964), published twenty years prior to the mass-market release of the Apple Macintosh and thirty years prior to the popular advent of the World Wide Web. There has also, I think, been advance anticipation for Remediation among the still relatively small coterie of scholars engaged in serious cultural studies of computing and information technology. Bolter and Grusin both teach in Georgia Tech’s School of Language, Communication, and Culture, the academic department which perhaps more than any other has attempted a wholesale make-over of its institutional identity in order to create an interdisciplinary focal point for the critical study of new media. Grusin in fact chairs LCC, and Bolter, who holds an endowed professorship at Tech, is a highly-regarded authority for his work on the hypertext authoring system StorySpace and for an earlier study, Writing Space: The Computer, Hypertext, and the History of Writing (1992), to which Remediation is a sequel of sorts. [Bolter’s book is reviewed by Anne Burdick in ebr, eds.] The book therefore asks to be read and received as something of an event, an extended statement from two senior scholars who have been more deeply engaged than most in defining and institutionalizing new media studies.

Much of Remediation’s importance is lodged in the title word itself. New media studies has been subjected to a blizzard of neologisms and new terminologies - many of them over-earnest at best - as scholars have struggled to invent a critical vocabulary adequate to discuss hypertexts and myriad other artifacts of digital culture with the same degree of cogency found in a field such as film studies. Bolter and Grusin clearly want “remediation” (the word) to stick, and the volume’s rhetorical momentum is often driven by simple declarative clauses like “[b]y remediation we mean…” and “[b]y remediation we do not mean…” Though the cumulative weight of these phrasings helps remind readers that they are in the presence of two critics in full command of their subject matter, the repetitive stress on “remediation” also produces some odd moments, such as this one from the preface:

It was in May 1996, in a meeting in his office with Sandra Beaudin that RG was reported to have coined the term remediation as a way to complicate the notion of “repurposing” that Beaudin was working with for her class project. But, as most origin stories go, it was not until well after the fact, when Beaudin reported the coinage to JB, who later reminded RG that he had coined the term, that the concept of “remediation” could be said to have emerged. Indeed, although the term remediation was coined in RG’s office, neither of us really knew what it meant until we had worked out together the double logic of immediacy and hypermediacy. (viii)

[ Bolter’s more recent collaboration with Diane Gromala, Windows and Mirrors (2003) applies the concept of immediacy/hypermediacy to graphic design. See Jan Baetens’ ebr review ]

This is writing that itself bears the mark of multiple mediations, from the willfully passive construction of its syntax (“that RG was reported to have coined…”) to the flutter of the keyword remediation from an italicized presentation to scare quotes and back again. I dwell on such details not to be clever, but rather because those visible stress-marks, and the placement of this vignette in the volume’s preface (where it is labeled, tongue-in-cheek, as an “origin story”) both underscore the extent to which language itself is about to be recycled and repurposed in the project that follows. For remediation is not in fact a neologism or a new coinage but rather a paleonym, a word already in use that is recast in wider or different terms: remediation is a word commonly encountered in business, educational, and environmental contexts to denote remedy or reform. Bolter and Grusin do acknowledge this later in the book by discussing remediation’s usage by educators (59), but the status of “remediation” (the word) as a paleonym itself becomes questionable when we realize that Bolter and Grusin clearly expect Remediation (the book) to perform exactly this kind of reformative work - most broadly as a corrective to the prevailing notion of the “new” in new media.

For all of this anxiety surrounding its presentation and pedigree, remediation in Bolter and Grusin’s hands is a simple (but not simplistic) concept, and therein lies its appeal:

[W]e call the representation of one medium in another remediation, and we will argue that remediation is a defining characteristic of the new digital media. What might seem at first to be an esoteric practice is so widespread that we can identify a spectrum of different ways in which digital media remediate their predecessors, a spectrum depending on the degree of perceived competition or rivalry between the new media and the old. (45)

This is, as Bolter and Grusin acknowledge, an insight also shared by McLuhan, who famously declared that the first content of any new medium must be a prior medium. But whereas McLuhan once divided the media sphere into “hot” and “cool” media based on the degree of participation they required (non-participatory media were, somewhat paradoxically, “hot and explosive” in McLuhan’s lexicon, while interactive media were termed “cool”), Bolter and Grusin parse various media forms against what they term the logics of immediacy and hypermediacy.

Immediacy denotes media that aspire to a condition of transparency by attempting to efface all traces of material artifice from the viewer’s perception. Immersive virtual reality, photo realistic computer graphics, and film (in the mainstream Hollywood paradigm) are all examples of media forms that obey the logic of immediacy - the expectation is that the viewer will forget that he or she is watching a movie or manipulating a data glove and be “drawn into” the environment or scene that is depicted for them. Hypermediated phenomena, by contrast, are fascinated by their own status as media constructs and thus call attention to their strategies of mediation and representation. Video games, television, the World Wide Web, and most multimedia applications subscribe to the logic of hypermediacy. And, as Bolter and Grusin are quick to claim, “our two seemingly contradictory logics not only coexist in digital media today but are mutually dependent” (6). This co-dependency inaugurates what they refer to as the “double logic of remediation,” which finds expression as follows: “Each act of mediation depends on other acts of mediation. Media are continually commenting on, reproducing, and replacing each other, and this process is integral to media. Media need each other in order to function as media at all” (55).

Once articulated, the ideas behind remediation are quickly grasped and readers may find themselves seeing (I stress because Bolter and Grusin’s critical orientation is overwhelmingly visual) remediations everywhere. It also becomes clear, as Bolter and Grusin themselves suggest, that remediation is the formal analogue of the marketing strategy commonly known as repurposing, whereby a Hollywood film (say) will spawn a vast array of product tie-ins, from video games to action figures to fast-food packages and clothing accessories. This practice raises a daunting set of questions for those concerned with matters of textual theory, for if we grant that a film (or an action figure) can be a text, we are then obliged to re-evaluate much of what we think we know about textual authority and textual transmission in this late age of mechanical reproduction - by what formal, material, or generic logic could we define the ontological horizon of the repurposed text known as “Star Wars?” Likewise, when one refers to “Wired,” is one speaking of just the printed newsstand version of the magazine or is one speaking of the multivalent media property that now cultivates a variety of vertically integrated distribution networks, including: an imprint for printed books about cyberculture, HardWired; an online forum and Web portal, HotWired; separate Web presences for the magazine itself as well as affiliated online ventures (which include WiredNews, LiveWired, and Suck); and two search engines, HotBot and NewsBot. That recognition of this broader media identity is central to any discussion of Wired the magazine is dramatized by the fact that as of this writing the URL deflects visitors from the site of the magazine proper to the aforementioned WiredNews - which only then offers a subordinate link to the Web presence for the newsstand version of Wired (which is itself of course an electronic remediation of the printed content).
In retrospect, it seems odd that Bolter and Grusin do not make more of Wired, both because of the complex media ecology outlined above and because in it we have an artifact of print culture that, largely on the basis of graphic design and strong marketing, has remediated the experience of “cyberspace” so successfully that the word “wired” itself has become a popular synecdoche for the Information Age.

Some extended case studies of that sort (MTV would have been another natural) might have added much to the book, but instead its middle section is taken up by more generic surveys of various media forms - computer games, photo realistic graphics, film, television, virtual reality, the World Wide Web, and others - and these are a mixed lot. The chapters on computer games, graphics, television, and film are generally strong. Bolter and Grusin have an enviable feel for the subtle relationships that obtain between media forms, and they are at their best during moments such as a discussion of Myst when they argue convincingly that the game - frequently remarked upon for the “realism” of its graphics - succeeds not via the logic of immediacy, but rather by remediating the immediacy of Hollywood film; they press the point home by observing that there are in fact hundreds of examples of video games adapted from mainstream films (98). Their argument about virtual reality’s lineage in film is equally suggestive: “One way to understand virtual reality, therefore, is as a remediation of the subjective style of film, an exercise in identification through offering a visual point of view… In their treatments [ Brainstorm, Lawnmower Man, Johnny Mnemonic, Disclosure, Strange Days ] Hollywood writers grasped instantly (as did William Gibson in his novel Neuromancer) that virtual reality is about the definition of the self and the relationship of the body to the world” (165-166). What is compelling here is not so much the notion that virtual reality is about “the definition of self and the relationship of the body to the world,” but rather the confidence with which Bolter and Grusin are able to identify a specific filmic technique - the subjective camera, prominent in all the titles mentioned above - and align it with the popular rhetoric surrounding virtual reality, thereby foregrounding the artificial imperatives of both media forms.

But at times the middle chapters also seem sparsely developed. That same chapter on virtual reality, for example, is only seven pages long (including illustrations), and it includes no discussion of any functional VR systems beyond mention of research by Georgia Tech’s Larry Hodges. Likewise, the only electronic artist to receive any individual treatment in the chapter on digital arts is Jeffrey Shaw, who is perhaps best known for an installation piece entitled The Legible City, now a decade old. At other times, elements of the historical record which it would have been desirable to have on hand are simply missing. A discussion of the video game Pong, for example, offers the tantalizing suggestion that its fundamentally graphical orientation, compared to contemporary UNIX and DOS command line interfaces, “suggested new formal and cultural purposes for digital technology” (90). Yet we are not given any specific date for Pong’s first release, or for the releases of its many subsequent versions and variations (which it would have been interesting to track across different platforms); nor do we learn who first programmed the game, or where, or why. Absences of this kind detract from the usefulness of the middle sections as basic references for students of new media.

Given the scope of the attempted coverage in Remediation’s middle sections - where the topics range from Renaissance painting and animated film to telepresent computing and “mediated spaces” (e.g., Disneyland) - lapses of the kind I note above are perhaps inevitable. And indeed, very early on in the book Bolter and Grusin offer a familiar kind of disclaimer: “We cannot hope to explore the genealogy of remediation in detail. What concerns us is remediation in our current media in North America, and here we can analyze specific texts, images, and uses” (21). But this emphasis on the “specific” is itself a scholarly move that, as Alan Liu and others have demonstrated, bears with it deep implications for any critical project conducted under the broad sign of cultural criticism, a point to which I will return (below).

But some remaining features of the book deserve notice first: Remediation is lovingly illustrated, and Bolter and Grusin deserve credit for the care with which the images were selected and reproduced. The juxtaposition of the front page of USA Today’s printed edition with the home page of USA Today on the web (40-41) or the comparison of stills from a 1980 CNN air check with a more contemporary broadcast format from CNN in 1997 (190-191) do as much to underscore the essential rightness of the core remediation concept as any number of expository passages in the text. The first and third sections of the book also include reference pointers to relevant passages from the survey of media forms in the middle section - these are “the printed equivalent of hyperlinks” (14), and some readers may find them occasionally convenient. Remediation’s third and final section examines logics of remediation in relation to contemporary conceptions of the self (readers who have already done their homework with Sandy Stone or Sherry Turkle may find themselves skimming these pages). The bibliography, with about 175 entries, is useful. And finally, there is the obligatory glossary; it will mark a significant milestone in the maturity of new media studies as a discipline when one can publish a book in the field without feeling the need to define for the lay-reader “virtual reality” or “MOO” (or “media,” for that matter: “Plural of medium” [274]).

Near the end of the book, Bolter and Grusin offer an account of the media coverage of Princess Diana’s funeral procession: “Because the funeral itself occurred for American audiences in the middle of the night, CBS decided to run a videotape of the whole ceremony later in the morning. At that same time, however, the procession was still carrying Diana’s body to its final resting place. The producers of the broadcast thus faced the problem of providing two image streams to their viewers” (269). The solution CBS adopted was to divide the screen into two separate windows, one displaying the funeral ceremony and the other the procession. Bolter and Grusin point out that this move marks a shift from the desire for immediacy and “authenticity” of experience that normally governs live TV to a logic of hypermediacy that places the emphasis on the media apparatus itself; but the more interesting point, I think, is that this particular broadcast solution was viable because CBS could count on its audience having already been exposed to bifurcated screen-spaces through the assimilation of the computer desktop and its attendant interface conventions into the cultural mainstream. Bracketing technical considerations, it seems reasonable to argue that CBS could not have opted for the two-window solution in an earlier era of television because the visual environment would have simply been too alien from their viewers’ expectations. Bolter and Grusin go on to note that, “other and perhaps better examples (both of hypermediacy and remediation) will no doubt appear, as each new event tops the previous ones in its excitement or the audacity of its claims to immediacy” (270). 
Had this closing chapter been written today, Bolter and Grusin would have almost certainly chosen as their example the multi-window displays that facilitated the so-called “surreal” split-screen television coverage of the House Judiciary Committee’s impeachment hearings and Operation Desert Fox (the American and British air strikes on Iraq) in December of 1998.

That the conflicting logics of immediacy (in the desire for live “eyewitness” coverage of two major news events transpiring simultaneously) and hypermediacy (in the spectacle of video feeds from Washington and Baghdad both on the screen at the same time, each in a separate content window, the display filled out by a lurid background “wallpaper” graphic) manifested themselves so dramatically in one of the most notable media events of recent memory surely confirms the usefulness of remediation as a critical armature for contemporary media studies. But it is worth noting that Bolter and Grusin explicitly describe their technique in Remediation as genealogical (“a genealogy of affiliations, not a linear history” [55]), and therefore I’d like to close this review with some additional words about genealogy, and its suitability to new media studies by contrast with other varieties of historicism.

Genealogy as a critical mode comes to us from Foucault; it is most closely associated with his later books such as Discipline and Punish and the three volumes of the History of Sexuality. Genealogy is distinct from Foucault’s other famous method, archeology, deployed most fully in works like The Order of Things and The Birth of the Clinic. Foucault’s most sustained articulation of genealogy is to be found in a 1971 essay entitled “Nietzsche, Genealogy, History,” whose opening lines are these: “Genealogy is gray, meticulous, and patiently documentary. It operates on a field of entangled and confused parchments, on documents that have been scratched over and recopied many times” (76). A few pages later, we read:

Genealogy does not resemble the evolution of a species and does not map the destiny of a people. On the contrary, to follow the complex course of descent is to maintain passing events in their proper dispersion; it is to identify the accidents, the minute deviations - or conversely, the complete reversals - the errors, the false appraisals, and the faulty calculations that gave birth to those things that continue to exist and have value for us; it is to discover that truth or being does not lie at the root of what we know and what we are, but the exteriority of accidents. (81)

Bolter and Grusin acknowledge this same essay, and indeed quote from it in their first footnote. Yet it seems questionable how much the “genealogy” of Remediation really resembles what Foucault imagined by the term. True, Bolter and Grusin’s narrative of media forms is not linear (or rather, it is not chronological), but their narrative is also “documentary” only in the most casual sense and it operates at a level of detail far removed from Foucault’s trademark archival research. Indeed, of the many books published on topics related to new media studies in recent years, none of them, it seems to me, has yet matched the level of documentary (archival) research evident in a work such as Michael A. Cusumano and David B. Yoffie’s Competing on Internet Time: Lessons from Netscape and its Battle with Microsoft (1998). A typical passage from Cusumano and Yoffie (who are business professors) reads like this:

In August 1994, the Seattle-based start-up Spry became the first company to market a commercial version of Mosaic. At least half a dozen non-NCSA-based browsers were also available or in the works. In addition to Netscape’s Navigator, competitors also included Cello, developed at Cornell; BookLink’s InterNet Works; the MCC consortium’s MacWeb; O’Reilly and Associates Viola; and Frontier Technologies’s WinTapestry. By early 1995, PC Magazine declared that 10 Web browsers were “essentially complete”[…] In April 1995, Internet World counted 24 browsers, and by the end of the year CNET had found 28 browsers worthy of review. Very few of those products had any appreciable market share. (95-96)

How soon we forget. Cello, WinTapestry, even Mosaic. Where are they now? Whole generations of software technologies (compressed within the week- and month-long micro-cycles of “Internet Time”) are already lost to us. But surely this level of detail - conspicuous in the InterCapped names of bygone products and technologies, punctuated by the antiquarian version numbers of specific hardware and software implementations - ought to be a key element of any historical method, genealogical or otherwise, that critics working in new media studies bring to bear.

Let me suggest that the start-up work of theorizing digital culture has by now largely been done, and that serious and sustained attention to archival and documentary sources is the next step for new media studies if it is to continue to mature as a field. Friedrich Kittler’s Discourse Networks 1800/1900 already does some of this work. And we could also do worse than Internet Time for a summation of the pace of scholarship in new media studies to date, with fresh books (books: the medium signifies) on matters cyber, virtual, or hyper appearing almost weekly. But where in all this are the careful analyses of the white papers and technical reports (for example) that must lie behind the changing broadcast strategies Bolter and Grusin point to at CNN? Where are the interviews with the network’s executives and with their media consultants and market analysts? Rather than speculate broadly about computer graphics or theories of digital reproduction, why not perform a detailed case study of one particular data format, such as JPEG or GIF (both of which have a fascinating history) or a particular software implementation such as QuickTime, which has been enormously influential to multimedia development as it has evolved through multiple versions and generations? Certainly there are practical constraints that might militate against such projects: would Apple unlock its technical reports and developers’ notes on QuickTime for a scholar writing a book? It is hard to know, but: Netscape did it for Cusumano and Yoffie.

A few more thoughts in this vein. Compared to other scholarly fields, new media studies has thus far operated within relatively limited horizons of historicism. Historical perspective in books on digital culture generally takes one of two forms: it is either broadly comparative or it is transparently narrative. Bolter’s earlier book, Writing Space, is a classic example of the former mode, contextualizing hypertext (very usefully) within a much longer history of writing. Sandy Stone’s pages describing the final days of the Atari Lab in The War of Desire and Technology at the Close of the Mechanical Age are an example of the latter narrative mode, as is the writing in such pop-history books as Simon and Schuster’s Where Wizards Stay Up Late: The Origins of the Internet. But both the comparative and the narrative modes encourage a relatively casual kind of historiographic writing. N. Katherine Hayles’ just-published How We Became Posthuman: Virtual Bodies in Cybernetics, Literature, and Informatics, which I am reading now, is perhaps the beginning of something new, offering a more rigorous kind of historical inquiry. [thREAD to Linda Brigham’s review of Hayles.] But Hayles still does not approach the level of self-reflexivity evident in a work like James Chandler’s England in 1819: The Politics of Literary Culture and the Case of Romantic Historicism, published last year by Chicago, in which Chandler historicizes history itself as a peculiarly Romantic category of knowledge, while simultaneously undertaking a meticulous investigation of the events of a single pivotal year in the development of British Romanticism. A brief passage from the preface, to suggest the flavor of the volume:

Within part 1, the first section, “Writing Historicism, Then and Now,” tries to establish a way of talking about “dated-specificity” in literary-cultural studies that makes patent the repetition between the “spirit of the age” discourse of British Romanticism and the contemporary discourse of the “return to history” in the Anglo-American Academy. The second section…moves from the notion of historical culture implicit in that “dated specificity” to consider the representation practices that such a notion of culture presupposes or demands… Then, having established how one might understand England in 1819 as a historical case, its literature as a historicizing casuistry, I turn…to explicate a series of works, all produced or consumed in that year, as cases in respect to that larger frame of reference. (xvi-xvii)

Chandler is ultimately ambivalent about the academy’s current insistence on “dated specificity” (including the sort I have been calling for above), as is his fellow-Romanticist Alan Liu in “Local Transcendence: Cultural Criticism, Postmodernism, and the Romanticism of Detail,” a seminal essay which ought to be required reading for anyone working in a field of cultural study, including media studies. Liu makes the telling point that recent critical-historical modes, from Foucauldian genealogy to cultural anthropology and the literary New Historicism, all thrive on an unexamined rhetoric that consecrates what he terms the “virtuosity of the detail” (80), a rhetoric which Liu is then able to convincingly align with the most familiar tenets of Romantic “local” transcendence, such that: “insignificance becomes the trope of transcendent meaning” (93).

Liu’s critique is too complex and finely-developed to go into here any further, but it underscores a fundamental crisis in new media studies today: the field, having really flourished only since the early nineties, has on the one hand not yet had occasion to undertake the kind of detailed case histories I advocate above; yet case studies (their “dated specificity”) are, on the other hand, already themselves being historicized as of a particular institutional moment. There is, for example, something to be learned from the curious genealogy of the font family known - fateful name - as Localizer (see FontFont). Released in 1996, the Localizer font mimics late-seventies LCD technology in an era when state-of-the-art digital typesetting permits perfect anti-aliasing. (Localizer is of course a classic remediation. Its design notes read in part: “we thought this would be the future, then it wasn’t, but it didn’t matter after all, so here it is.”) Layers and layers of media history are perhaps held in delicate high-res suspension among such exteriorities of accidents. Yet at present, new media studies apparently lacks the deep historical self-reflexivity necessary to undertake a genealogy of the Localizer font that would not also appear naive in the face of a critique such as Liu’s.

All of this is not to be taken as a criticism of Remediation itself, for Bolter and Grusin would surely (and fairly) object that a book engaging the particular issues I have been raising here was simply not the book they set out to write. Nonetheless, the probable success of a book such as Remediation only intensifies the realization that new media studies now faces disciplinary challenges that go far beyond building a critical vocabulary and syntax. I will go on record as saying that in order for new media studies to move beyond its current 1.0 generation of scholarly discourse - a discourse which is still largely, though not exclusively, descriptive and explanatory (all those glossaries!) - the field must make a broad-based commitment to serious archival research. Of course the archive is more likely to be found at venues such as Xerox PARC or IBM or Microsoft or Apple - or in a Palo Alto garage - than at the library and rare book room. But case studies of specific hardware and software implementations, and of the micro-events in the commercial and institutional environments in which those implementations are developed and deployed are absolutely essential if we are to begin achieving deeper understandings of the impact of new media on the culture at large. (An example of one such “micro-event”: March 31, 1998. Netscape Communications Corporation posts the source code for its 5.0 generation of browsers on its public Web site in an attempt to recapture market-share from Microsoft. This, I submit, is the real stuff of which new media history is being made.) Those case studies can - should - be theoretically informed, building on the groundwork of a book such as Remediation.

There is no task more important for new media studies than demystifying the unequivocally material processes of development now at work in the high-tech industry. Doing that work, and doing it right, will take time - archive time, not Internet Time.

>>—> Jan Baetens responds.


works cited

Bolter, Jay David and Richard Grusin. Remediation: Understanding New Media. Cambridge, MA: MIT Press, 1998. [Note: All citations in this review are from a pre-press review copy of Remediation, provided by the MIT Press.]

Chandler, James. England in 1819: The Politics of Literary Culture and the Case of Romantic Historicism. Chicago: University of Chicago Press, 1998.

Cusumano, Michael A. and David B. Yoffie. Competing on Internet Time: Lessons from Netscape and Its Battle with Microsoft. New York: The Free Press, 1998.

Foucault, Michel. “Nietzsche, Genealogy, History.” The Foucault Reader. Ed. Paul Rabinow. New York: Pantheon Books, 1984. 76-100.

Hafner, Katie and Matthew Lyon. Where Wizards Stay Up Late: The Origins of the Internet. New York: Simon and Schuster, 1996.

Hayles, N. Katherine. How We Became Posthuman: Virtual Bodies in Cybernetics, Literature, and Informatics. Chicago: University of Chicago Press, 1999.

Kittler, Friedrich. Discourse Networks 1800/1900. Trans. Michael Metteer. Stanford, CA: Stanford University Press, 1990.

Liu, Alan. “Local Transcendence: Cultural Criticism, Postmodernism, and the Romanticism of Detail.” Representations 32 (Fall 1990): 75-113.

McLuhan, H. Marshall. Understanding Media: The Extensions of Man. Cambridge, MA: MIT Press, 1964, 1994.

Stone, Allucquere Rosanne. The War of Desire and Technology at the Close of the Mechanical Age. Cambridge, MA: The MIT Press, 1995.

Pension Fund deploys SAP solution for “big data” analysis
In 2012-2013 the Pension Fund of Russia invested about 120 million rubles in the development and support of an IT system based on SAP NetWeaver. Next year the agency intends to deploy the German vendor’s newest solution for “big data” analysis, HANA.

To shorten the time needed to produce consolidated reports and to make quick adjustments to budget items, the Pension Fund has begun deploying the SAP HANA analytics platform.

HANA (High-performance Analytic Appliance) is a tool for analyzing large volumes of data. It consists of hardware and SAP software delivered as a preconfigured appliance. HANA’s high-speed query processing is achieved through its own database, included in the package, which runs entirely in main memory (in-memory).

In March 2011, SAP and IBM presented the first official performance test results for HANA. The system handled 10,000 queries per hour, processing 1.3 TB of data in that time and returning results within a few seconds.

The deployment of SAP HANA at the PFR (the Russian Pension Fund) is part of the modernization of its automated expense accounting and control system, called “Budgeting,” which already runs in 82 PFR branches across Russia.

The analytics platform is intended to improve data accuracy, reduce processing time, and increase the PFR’s overall productivity. The HANA landscape is planned to be rolled out in a pilot zone in 2014.

“The main task of deploying and developing the budgeting system is to improve the quality of budget data and to speed up the processes of consolidating, approving, and adjusting the PFR’s expense budget,” the PFR told CNews.

The “Budgeting” system is built on SAP NetWeaver. The PFR spent about 120 million rubles on creating and developing it during 2012-2013.


What Middleware Is
Middleware is connecting software containing a set of services that allows multiple processes, running on one or more machines, to interact across a network. Middleware is essential for migrating from mainframe applications to client/server applications, and for providing communication across different platforms. In information technology, “middleware” is a general programming term for software used to join, mediate between, or extend two existing programs or applications. Middleware software sits between application programs and the services provided by the operating system.
Functions of Middleware
Middleware performs the following functions:
  • Provides a simple application programming environment that hides the details of the underlying operating-system services.
  • Provides a common application programming environment that spans different computers and operating systems.
  • Fills the gap between the operating system and applications in areas such as networking, security, databases, user interfaces, and system administration.
The Evolution of Middleware
The development of middleware over time can be categorized as follows:
1. On-Line Transaction Processing (OLTP)
An early development in connecting remote databases. It was first devised in 1969 by an engineer at Ford, then adopted by IBM, and is now known as OLTP.
2. Remote Procedure Call (RPC)
Provides network facilities transparently. Open Network Computing (ONC) was the first prototype, introduced in the early 1970s. Sun led in this area by issuing a standard for connecting to the Internet. The Distributed Computing Environment (DCE), released by the Open Systems Foundation (OSF), provides ONC-like functionality but is fairly complex and not easy to administer.
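The transparency that RPC middleware provides (a remote call that looks like a local one) can be sketched with Python's standard-library XML-RPC modules. This is an illustrative example, not part of the original text; the host, port, and the `add()` function are arbitrary choices:

```python
# Minimal RPC sketch: a server registers a function, and a client calls it
# over the network as if it were local. The middleware (here, the XML-RPC
# server stub and client proxy) hides all the transport details.
import threading
from xmlrpc.server import SimpleXMLRPCServer
from xmlrpc.client import ServerProxy

def add(a, b):
    return a + b

# Server side: expose add() on a local port and serve in the background.
server = SimpleXMLRPCServer(("127.0.0.1", 8099), logRequests=False)
server.register_function(add, "add")
threading.Thread(target=server.serve_forever, daemon=True).start()

# Client side: the proxy makes the remote call look like a local one.
client = ServerProxy("http://127.0.0.1:8099")
result = client.add(2, 3)
print(result)  # -> 5

server.shutdown()
```

The client never touches sockets or wire formats; the proxy and server stub absorb them, which is exactly the role the middleware layer plays.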

Middleware Services
Middleware services are a collection of distributed software occupying the layer between applications on the one hand and the operating system and network services on the other, at each node of a computer network. They provide a set of APIs (Application Programming Interfaces) at a higher level than those offered by the operating system and network services, allowing an application to:
  • Locate a service transparently across the network.
  • Interact with other applications or services.
  • Remain independent of the underlying network services.
  • Be reliable and available as a service.
  • Scale in capacity without losing functionality.
Types of middleware services:
1. Distributed System Services
Critical program-to-program communications, usually data-management services such as RPC, MOM (Message-Oriented Middleware), and ORBs.
2. Application Services
Access to distributed services and the network, such as TP (transaction processing) monitors and database services such as Structured Query Language (SQL).
3. Middleware Management Services
Allow applications and functions to be monitored continuously, to ensure optimal performance in a distributed computing environment.
Examples of middleware services:
1. Transaction Monitors
The first products to be called middleware. A transaction monitor sits between requests from client programs and the database, ensuring that every transaction against the database is served properly.
2. Messaging Middleware
Provides the interface and transport between applications. It can hold data in a message queue when the destination machine is down or overloaded, and it may contain business logic that routes messages to their real destination and reformats the data appropriately. It works much like an email system, except that messaging middleware is used to send data between applications.
3. Database Middleware
Database middleware provides an interface between a query and several distributed databases. Using either a hub-and-spoke or a fully distributed architecture, it allows data to be joined from several different or separate data sources.
4. Application Server Middleware
A Web-based application server providing an interface to a wide variety of applications; it is used as middleware between the browser and the application.
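The store-and-forward behaviour of messaging middleware described above can be illustrated with a toy in-process queue. This is a deliberately simplified sketch; real MOM products add persistence, routing, and delivery guarantees:

```python
# Toy message-oriented middleware sketch: a queue buffers messages so the
# sender and receiver need not run in lock-step, which is the
# store-and-forward behaviour messaging middleware provides.
import queue

broker = queue.Queue()  # stands in for the middleware's message store

# Producer: enqueue messages even though no consumer is running yet.
for order_id in (101, 102, 103):
    broker.put({"type": "order", "id": order_id})

# Consumer: drain the queue later, in arrival order.
received = []
while not broker.empty():
    received.append(broker.get()["id"])

print(received)  # -> [101, 102, 103]
```

The producer and consumer never interact directly; only the queue couples them, so either side can be down while the other keeps working.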
Examples of Middleware
The following are examples of middleware software:
  • Java's Remote Procedure Call
  • The Object Management Group's Common Object Request Broker Architecture (CORBA)
  • Microsoft's COM/DCOM (Component Object Model), and also .NET Remoting
  • ActiveX controls (in-process COM components).

IBM X-Force 2016 Cyber Security Intelligence Index : 2016-04-20 17:43:21 - Global Security Mag Online - IBM X-Force today presents its 2016 Cyber Security Intelligence Index. This annual report looks back at 2015, drawing on the billions of operational and investigative data points collected by IBM Security Services on security events at more than 1,000 companies across 100 countries. This year's report offers an interesting view of the arms race being played out between attackers and defenders in IT security. Unsurprisingly, the attackers… - Malware
eSignLive meets data-residency requirements with new data centers located in Europe : 2016-04-13 10:47:23 - Global Security Mag Online - VASCO Data Security International announces that eSignLive, its e-signature solution for professionals, is deploying its solution in data centers in Europe. eSignLive thereby becomes the e-signature solution with the most cloud locations on the market. eSignLive relies on the global network of IBM Cloud data centers to launch new regional instances and thus meet national data-residency requirements and growing demand… - Products
IBM and Box partner to enable local data storage in Europe and Asia through Box Zones and the IBM Cloud : 2016-04-12 18:28:27 - Global Security Mag Online - IBM announces its intention to expand its global partnership with Box so that companies can choose the region in Europe or Asia where their data is stored on the IBM Cloud. This will be available through Box's new technology, Box Zones, which will let companies adopt Box and store their data in selected regions. IBM will leverage Box Zones to ease hybrid-cloud deployment with its content management solutions - Products
DocuSign and IBM establish a global strategic partnership on digital transformation and digital trust : 2016-04-11 14:33:43 - Global Security Mag Online - With digital transformation among the top strategic objectives of business leaders and decision makers, DocuSign, Inc. announces a global strategic partnership with IBM, integrating eSignature and DTM (Digital Transaction Management) functionality into IBM software and services. The strategic partnership sees the two companies combine their strengths to help enterprises manage transactions - Business
          Check Out The Dash Robot at Best Buy #TechToys        

This is a sponsored post for Best Buy. I couldn’t have been but nine-or-so years old when one of my mother’s day care parents bought our family an IBM computer for Christmas as a “thank you” for taking such wonderful care of their children. He was an IBMer, so he got a great deal on […]

The post Check Out The Dash Robot at Best Buy #TechToys appeared first on Crazy Adventures in Parenting.



          Comment on Secure password storage for i5_connect connections. by Chris Hird        
We have just completed an update to our JobQGenie PHP interface we are building and decided to add the ability to use encrypted storage for the password as suggested above. Unfortunately the overhead of constantly decrypting the password to connect through the persistent connection is adding some significant delays to the returned page. The test is being run on a Laptop connecting via Easycom to the IBM i so the decryption is taking place in the PC, the significant delay can only be attributed to the time it takes to decrypt the password. So if speed is important encryption may be a bottleneck. We will add some time stamps to see the actual differences and post the results later. Chris..
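The overhead Chris describes can be modeled in a few lines. This is a hypothetical sketch, not the JobQGenie code: an expensive key-derivation call (`hashlib.pbkdf2_hmac`) stands in for the real decryption routine, and caching the decrypted value for the lifetime of the persistent connection avoids paying that cost on every page:

```python
# Toy model of decrypting a stored password on every request versus
# decrypting once and caching it for the persistent connection.
# hashlib.pbkdf2_hmac stands in for the real decryption step.
import hashlib
import time

STORED = b"encrypted-password-blob"  # placeholder for the stored secret

def slow_decrypt(blob: bytes) -> bytes:
    # Deliberately expensive, like a real KDF/decryption routine.
    return hashlib.pbkdf2_hmac("sha256", blob, b"salt", 200_000)

# Strategy 1: decrypt on every one of 5 simulated page loads.
t0 = time.perf_counter()
for _ in range(5):
    slow_decrypt(STORED)
per_request = time.perf_counter() - t0

# Strategy 2: decrypt once, reuse the cached plaintext.
t0 = time.perf_counter()
cached = slow_decrypt(STORED)
for _ in range(5):
    _ = cached  # reuse, no further decryption
cached_time = time.perf_counter() - t0

print(per_request > cached_time)  # -> True: caching wins
```

The trade-off, of course, is that the plaintext then lives in memory for the connection's lifetime, which is exactly the speed-versus-security balance the comment raises.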
          Comment on New version Zend Studio for IBM i by Chris Hird        
Kent, thank you for the updated information. I did take a look at the Adobe site and can see the option to upgrade to Flash Builder 4.5 for PHP, but it does not list Zend Studio as an upgrade option. Perhaps you could ask Adobe to add it? Also glad to see Zend Studio will continue to be enhanced and supported. Chris...
          Goodbye, Lotus 1-2-3        
"The first killer app was VisiCalc. This early spreadsheet turned the Apple II from a hobbyist toy to a business computer. VisiCalc came with room for improvement, though. In addition, a new architecture and operating system, the Intel-based IBM PC and MS-DOS, also needed a spreadsheet to be taken seriously. That spreadsheet, released in early 1983, would be Lotus 1-2-3, and it would change the world. It became the PC's killer app, and the world would never be the same. On May 14, IBM quietly announced the end of the road for 1-2-3, along with Lotus Organizer and the Lotus SmartSuite office suite. Lotus 1-2-3's day is done." Impressive 30 year run.
          Hardware – IBM and Sony develop high capacity 330 terabyte tape cartridge        
IBM unveils cognitive platform for IT services
IBM Watson technology is intended to make IT infrastructures faster, more secure, and more efficient. Thanks to artificial intelligence, the degree of service automation is rising. IBM has unveiled the first Watson-based services platform, available via the IBM Cloud. According to the company, the platform's cognitive capabilities are the prerequisite for smart, more highly automated IT processes. The platform works with artificial intelligence to identify potential problems and … Read more
          Filenet Systems Engineer (DOE L Clearance Required)        
NM-Albuquerque, Job Description: Contractor must have applied for DOE L Security Clearance before starting. Standard Hours: 9am – 3pm are core hours. Beyond that, there is flexibility on start and stop time The client is looking for a candidate for maintenance of the IBM Filenet software suite. Candidate will be responsible for troubleshooting, managing content, and upgrading the system. IBM ECM components curren
          Eclipse Celebrates 10 Years of Innovation        
In the month of November, the Eclipse community is celebrating 10 years since the start of the Eclipse open source project. In November 2001, the Eclipse IDE and platform were first made available under an open source software license. IBM made the initial $40 million contribution of technology to start the Eclipse project that has now grown to technology commons with an estimated value of over $800 million. The Eclipse community has also emerged as the leading place for individuals and organizations to collaborate on innovative technology development.
          EclipseCon Keynote: What Is Watson?        
The Eclipse Foundation is pleased to announce the IBM Watson project is coming to EclipseCon 2011. 'What is Watson' will be a keynote presentation featuring the technologies that power Watson.
          Last chance to register for Eclipse Summit Europe! Sign up by October 7.        
Eclipse Summit Europe features over 60 technical talks and keynotes from Erich Gamma of IBM and Joerg Sievert of SAP.
          Gold Sponsors Announced for EclipseCon 2007        
The Eclipse Foundation today announced that Actuate, BEA Systems, Business Objects, IBM and Klocwork have signed on as Gold Sponsors for EclipseCon 2007, the annual conference that brings together the Eclipse community. Gold Sponsorship is the highest level of sponsorship for the conference. The third annual EclipseCon will be held at the Santa Clara Convention Center, Santa Clara, Calif., March 5-8, 2007.
          IBM Donates Translations for Eclipse 3.2.1        
IBM is pleased to contribute translations for the Eclipse Project, the Eclipse Web Tools Platform (WTP) Project, the Eclipse Test and Performance Tools Platform (TPTP) Project, the Business Intelligence and Reporting Tools (BIRT) Project, the Eclipse Modeling Project, the Eclipse Data Tools Platform (DTP) Project and for several subprojects of the Eclipse Tools Project for the Callisto releases.
          Upcoming Eclipse RCP Webinars featuring 'Building a Great GUI' and 'Deploying RCP Applications'        
Two new RCP webinars have been scheduled: 1) Building a Great GUI using Eclipse RCP is scheduled for November 21, and 2) Packaging and Deploying Applications based on Eclipse RCP is scheduled for December 14. These webinars are sponsored by Palamida, Instantiations, IBM and Eclipse Foundation.
          New Top-Level Eclipse Project Encourages Adoption of Open Standards for Model-Based Development        
The Eclipse Foundation today announced the creation of a new top-level project, called the Eclipse Modeling Project. The new project will focus on the evolution and promotion of model-based development technologies within the Eclipse community. Richard Gronback of Borland Software and Ed Merks of IBM were selected as project co-leaders, and are teaming to continue to advance Eclipse modeling technologies and to drive adoption of open standards related to software modeling.
          Eclipse Foundation, Zend Technologies, and IBM Announce the Approval of the PHP IDE Project        
The Eclipse Foundation, an open source community committed to the implementation of a universal software development platform, Zend Technologies and IBM, today announced that the Eclipse PHP IDE project has been approved by the Eclipse Foundation. The technology project was proposed by Zend and IBM on October 21, 2005. It will deliver a PHP Integrated Development Environment framework for the Eclipse Platform and will encompass the development components necessary to develop PHP-based web applications and will leverage the existing Eclipse Web Tools Project.
          NetSeminar December 8th: A Conversation with Erich Gamma        
Thursday, December 8, 2005, 11:00 am PT/2:00 pm ET, Duration: 60 minutes

For nearly two decades, Erich Gamma has been at the forefront of innovation in software development. Perhaps best known as the co-author of the landmark book "Design Patterns: Elements of Reusable Object-Oriented Software", Dr. Gamma is also the developer, with Kent Beck, of JUnit, the de facto standard in terms of unit testing tools for Java. He leads the Eclipse Java Development tools project and is a member of the Eclipse and the Eclipse Tools project management committees. Dr. Gamma is currently a Distinguished Engineer working for IBM Rational Software in Zurich, Switzerland.

In this wide-ranging interview, Dr. Gamma sits down with Jonathan Erickson, Editorial Director of CMP Media's Software Development Group, to discuss topics such as:

- Learn how to balance cost reduction against its impact on customer loyalty.
- What was IBM's motivation in initially launching the Eclipse project?
- Where will Eclipse go from here?
- What are the challenges--and benefits--of Open Source software development?
And more!
          TPTP Selected as Finalist for 2005 Testers Choice Award        
The Eclipse Test & Performance Tools Platform (TPTP) Project was selected as a finalist for the 2005 Testers Choice award in the "Java Test & Performance" category. Also, a TPTP-based tool, IBM Rational Performance Tester, was selected as a finalist in the "Load/Performance Test Tools" category.
Make in India and the IT industry: software products, services, and the future of India
Just as the sea contains tremendous opportunities and resources, we can find similar possibilities in the software industry. Emerging verticals (retail, healthcare, utilities) and growing dependence on information technology are creating a threshold. The IT sector is driven mainly by products and services, with services holding the major share: data processing, consultancy services, software supply services, business and management consultancy, market research, and technical testing and analysis services. This industry can transform any country's economy and generate millions of jobs.

With the emergence of its software industry, India gained special recognition in the world. The Indian manufacturing sector has the highest IT spending, followed by the automotive, chemicals, and consumer products industries. The IT-BPM sector constitutes 8.1% of the country's GDP. 60% of firms use India for testing services, and IT-BPM revenues are expected to reach USD 118 billion in 2014. The IT industry comprises more than 15,000 firms delivering 3.1 million jobs, of which 1,000+ are large firms.

According to a PricewaterhouseCoopers (PwC) report, China topped the list of emerging markets by software revenue with $2,738 million, while India ranked 5th among emerging markets in 2011. The Indian IT industry has been identified primarily with software services and lags in the product market; according to the industry body NASSCOM, revenue from the software product segment currently stands at $2.2 billion out of $100 billion. By one estimate, Apple earns $368 on every $560 iPhone; in contrast, Foxconn's margin on every iPhone it manufactures for Apple is less than $15. The IT products market is dominated by global players such as Microsoft, Oracle, SAP, and IBM. Now they too are moving toward providing services, which is quite alarming.

The SMAC (social, mobility, analytics, cloud) market will become the next trajectory for every industry player in the coming decade. India needs to sustain its success in this sector as it evolves globally, and it can reach a higher position thanks to its large pool of skilled engineers and growing companies. Traditional business strongholds will make way for new geographies; there will be new customers, and more and more SMEs will adopt IT applications and services.
“We have companies that have shown they have what it takes to be global leaders – InMobi challenges Google in mobile advertising, and Fusion Charts is a preferred source for visualization tools.” – Professor Rishikesha T. Krishnan

India needs such companies because, as the number of software product users grows and new technologies such as cloud computing emerge, there will be new challenges and changing business needs. The Indian government's Make in India campaign can give the software industry the right direction, with a focus on products alongside services.

The Government of India is actively providing fiscal incentives, liberalizing norms for FDI, and easing the raising of capital abroad. Other government initiatives inspire Indian companies toward product startups, such as the Indian Software Product Industry Round Table (iSPIRT). The National Policy on Information Technology 2012 aims to increase IT and BPM industry revenues to USD 300 billion by 2020 and to expand exports to USD 200 billion by 2020.

There are more possibilities to explore in the software industry; it is time to take a lion's step on the journey from Make in India to Made in India.

          Idea Festival: Unlocking secrets to ‘deep innovation’        
IBM had Think Pads long before the invention of the laptop computer that now goes by that name. The original Think Pads were leather-bound pads of paper that IBM employees were given by management, beginning in 1923, to record their ideas and inspirations. David Barnes, an IBM executive whose official title is “technology evangelist”, used […]
          Delivery Associate Partner - IBM - Canada        
We are seeking an Associate Partner who will contribute significantly to the aggressive growth objectives of the team. IBM Global Business Services:....
From IBM - Mon, 17 Jul 2017 21:06:15 GMT - View all Canada jobs
          Associate Partner – Public Sector (Ottawa or Toronto location) - IBM - Canada        
We are seeking an Associate Partner who will contribute significantly to the aggressive growth objectives of the team. IBM Global Business Services:....
From IBM - Thu, 06 Jul 2017 15:23:55 GMT - View all Canada jobs
Security meets Workflow: iQ.Suite combines e-mail management and workflow management
GROUP Business Software (GBS) has updated its e-mail management solution iQ.Suite. Version 19.1 for IBM Domino/Verse and version 15.1 for Microsoft Exchange/SMTP and Office 365 offer numerous new features in e-mail security and, for the first time, allow seamless integration of e-mail content into business processes.
The History and Development of Pascal

Turbo Pascal
Turbo Pascal is a software development system consisting of a compiler and an integrated development environment (IDE) for the Pascal programming language, running on the CP/M, CP/M-86, and MS-DOS operating systems, developed by Borland under the leadership of Philippe Kahn. The name Borland Pascal was generally used for the higher-end package (with more libraries and a standard source-code library), while the cheaper and most widely used version was sold as Turbo Pascal. Borland Pascal also refers to Borland's specific dialect of Pascal. Owing to its long history, Borland has released three old versions of Turbo Pascal free of charge: versions 1.0, 3.02, and 5.5, which run on MS-DOS.

Turbo Pascal was originally the Blue Label Pascal compiler, written in 1981 by Anders Hejlsberg for NasSys, the cassette-based operating system of the Nascom microcomputer. The compiler was rewritten for CP/M and named Compas Pascal, and later renamed Turbo Pascal for MS-DOS and CP/M. A version of Turbo Pascal for the Apple Macintosh was released in 1986, but its development was discontinued around 1992. Other versions were also available for CP/M machines such as the DEC Rainbow in several releases.

The DOS Versions
The Turbo Pascal compiler was based on the Blue Label Pascal compiler originally produced in 1981 for NasSys, the cassette-based operating system of the Nascom microcomputer, by Anders Hejlsberg. Borland licensed the core of Hejlsberg's "PolyPascal" compiler (Poly Data was the name of Hejlsberg's company in Denmark) and added a user interface and an editor. Anders Hejlsberg joined the company as an employee and was the architect of every version of the Turbo Pascal compiler and of the first three versions of Borland Delphi. The compiler was first released as Compas Pascal for CP/M, and then on 20 November 1983 as Turbo Pascal for CP/M, CP/M-86 (for example, an Apple II fitted with a Z-80 SoftCard), and DOS machines. On its American debut, Turbo Pascal sold for 49.99 USD. The integrated Pascal compiler was of very good quality compared with other Pascal products of the time, and it was widely accepted.
The name Turbo referred to the speed both of compilation and of the executables produced. The edit/compile/run cycle was faster than with other Pascal implementations because everything needed to build the program was kept in RAM, and because it was a single-pass compiler written in assembly language. Compilation was very fast compared with other languages (even Borland's own C compiler), and programmers' time was saved because a program could be compiled and run directly from the IDE. The speed of the COM executables was a breakthrough for developers whose only prior microcomputer programming experience was in BASIC.
Bill Gates saw the success of Turbo Pascal at first hand and could not understand why Microsoft's products were so slow. He vented his fury on Greg Whitten [programming director of Microsoft Languages] and shouted at him for an hour. He could not understand how Kahn could beat a powerful competitor like Microsoft.
The IDE was very advanced for its day, when computing resources on the IBM PC were severely limited (the IBM design had deliberate limitations, so its performance could not threaten the enterprise products that earned IBM its profits). The IDE was simple and intuitive, with a well-organized menu system. Early versions used the WordStar key functions, the de facto standard of the time. Later versions of the IDE, designed for PCs with more disk space and memory, could display the definition of a language keyword when the cursor was placed on it and F1 was pressed; the definitions also included example code. This helped less experienced programmers learn from the IDE itself, without needing a book.
Versions 1 to 3
The first version of Turbo Pascal, later referred to as version 1, performed very fast compared with other Pascal compilers for microcomputers. It was available for CP/M, CP/M-86, and MS-DOS, and it was very widely used at the time. The CP/M version could be run on the hugely popular Apple II when the machine was fitted with a Z-80 SoftCard, the first hardware product released by Microsoft, in 1980.
At that time CP/M used a simple executable file format with the .COM extension; MS-DOS could use either .COM (not compatible with the CP/M format) or the .EXE format. Turbo Pascal then supported only .COM binaries, which was not a significant limitation in that era. The Turbo Pascal package itself was a single .COM file of about 28 kilobytes, including the editor, compiler, linker, and library routines. The edit/compile/run cycle was more efficient than with other Pascal implementations because all the elements involved in building a program were kept in the computer's memory (RAM), and because the compiler was a single-pass compiler written in assembly language. Compilation was very fast compared with other products (even compared with Borland's own C compiler).
When the first version of Turbo Pascal appeared, on 20 November 1983, this kind of IDE was still novel. On its debut in the American software market, the package was priced at USD 49.99. The integrated Pascal compiler was of very good quality compared with its competitors, and those features were offered at an affordable price.
Versions 2 and 3 were further developments of the earlier versions, still able to work entirely in memory and producing .COM/.CMD binaries. Support for CP/M and CP/M-86 was dropped after version 3.
Later versions
Version 4, released in 1987, was a rewrite of the whole system. The compiler now produced .EXE binaries on MS-DOS instead of .COM, and CP/M and CP/M-86 were no longer supported. This version also introduced a full-screen user interface with pull-down menus and a full-screen editor; earlier versions had used a text-based menu screen. Microsoft Windows did not yet exist when this version was released, and even mouse use was still rare.
Version 5.x introduced the blue screen that would become a familiar trademark of the company's MS-DOS compilers until the DOS era ended in the mid-1990s.
The last version released was version 7. Borland Pascal 7 consisted of an IDE plus compilers for MS-DOS, extended DOS, and Windows 3.x programs, while Turbo Pascal 7 could only build standard MS-DOS programs. The package also included a graphics library that abstracted graphics programming over several external graphics drivers, but the performance of this library was unsatisfactory.
Assembly language
Although all versions of Turbo Pascal supported inline machine code embedded in Pascal source, later versions added easy integration of assembly language with Pascal. This let programmers improve execution performance further and, in addition, gave them direct access to the hardware.
Support for the memory models of the 8086 processor was provided through inline assembly, compiler options, and language extensions such as the keyword "absolute".
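As an illustration, here is a minimal sketch of those two extensions in Turbo Pascal 6/7 syntax. The identifiers are invented for this example; the "absolute" declaration and the "asm" block are the dialect features described above:

```
{ Sketch only: maps a variable onto a fixed segment:offset address
  and shows an inline assembly block (Turbo Pascal 6.0 and later). }
var
  { CGA/EGA/VGA color text-mode video memory lives at B800:0000. }
  VideoMem: array[0..3999] of Byte absolute $B800:$0000;

procedure ClearCarry;
begin
  asm
    clc    { clear the processor's carry flag directly }
  end;
end;
```

Declaring a variable "absolute" produces no storage of its own; the identifier simply aliases the given address, which is how DOS-era programs wrote characters straight to the screen.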
In 1995 Borland stopped developing Turbo Pascal and replaced it with Delphi, based on Object Pascal (Pascal extended with object-oriented programming features). Delphi brought many new concepts to Turbo Pascal users, such as RAD (rapid application development). Even so, the 32-bit versions of Delphi still support a good many aspects of Turbo Pascal.
Several other products compatible with Turbo Pascal have also appeared, such as Free Pascal and Virtual Pascal.
Borland Pascal is still taught as a school subject or university course in some schools and universities in Germany and the United States. In Belgium, Romania, Serbia, Moldova, and Bulgaria, Pascal is used even in junior high schools. In South Africa, however, Pascal is no longer used, having been replaced by Delphi and Java.
Some teachers prefer Borland Pascal 7 or Turbo Pascal 5.5 because they are simpler than today's IDEs (such as Visual Studio or Borland JBuilder), letting them focus their teaching on the language itself rather than on how to operate the IDE. In addition, the software is available free of charge and can be downloaded from the official site.
Example code
·         The Pascal language is not case sensitive.
·         Historically, Pascal comments are written { like this } or (* like this *), and may span several lines. Later versions of Borland Pascal also support C++-style comments: // like this , which run to the end of the line.
·         The case syntax is more flexible than in standard Pascal.
·         Sets can have at most 2^8 (256) members.
·         Standard fixed-length strings are supported, but there is also a more flexible String type.
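A short sketch illustrating the dialect features listed above (the program and identifiers are invented for this example):

```
program DialectDemo;
{ brace-style comment }
(* parenthesis-star comment, may span lines *)
// C++-style comment (later Borland versions only)
var
  Lower: set of Char;  { a set type: at most 2^8 (256) members }
  S: String;           { Borland's flexible String type }
begin
  Lower := ['a'..'z'];
  S := 'Turbo';
  case S[1] of                         { flexible case: ranges allowed }
    'A'..'Z': WRITELN('upper case');   { not case sensitive: WRITELN = WriteLn }
    'a'..'z': WriteLn('lower case');
  end;
end.
```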
Here is the classic Hello world program in Turbo Pascal:
  begin
    WriteLn('Halo dunia');
  end.
And here is a program that asks for a name and writes it back to the screen a hundred times:
program TulisNama;
var
  Nama: String;
  I: Integer;
begin
  Write('Enter your name: ');
  ReadLn(Nama);
  for I := 1 to 100 do
    WriteLn(Nama);
end.
BIOS (Basic Input Output System)
In IBM PC systems and compatibles (computers based on the Intel x86 processor family), BIOS, short for Basic Input Output System, refers to the collection of software routines that can:
  1. Initialize and test the hardware (in a process called the Power On Self Test, POST)
  2. Load and start the operating system
  3. Manage some of the computer's basic settings (date, time, storage configuration, boot configuration, performance, and stability)
  4. Help the operating system and applications manage the hardware through the BIOS Runtime Services.
The BIOS provides a low-level communication interface and can drive many kinds of hardware (such as the keyboard). Because of this closeness to the hardware, a BIOS is generally written in the assembly language of the machine in question.
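As a sketch of how a 16-bit DOS program would call these BIOS runtime services, the following Turbo Pascal fragment invokes BIOS video interrupt 10h through the standard Dos unit. The procedure name is invented; the register values follow the documented INT 10h "teletype output" convention:

```
uses Dos;

{ Print one character through the BIOS video service (INT 10h). }
procedure BiosPutChar(C: Char);
var
  R: Registers;
begin
  R.AH := $0E;      { function 0Eh: teletype output }
  R.AL := Ord(C);   { the character to print }
  R.BH := 0;        { video page 0 }
  Intr($10, R);     { raise the software interrupt }
end;
```

Calling `BiosPutChar('A')` bypasses DOS entirely and asks the BIOS itself to draw the character, which is exactly the kind of low-level service the list above describes.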
The primary function of the BIOS is to run the Power-On Self Test (POST). This test verifies that the computer has all the parts, and all the functionality, needed to start operating properly, such as working memory, a keyboard, and other components. If an error is detected during the test, the BIOS has the computer emit a code that reveals the problem; the error code is usually a series of beeps heard shortly after startup.
The BIOS also gives the computer basic information about how to interact with critical components, such as the drives and memory it will use to load the operating system. Once these basic instructions have been loaded and the self-test has passed, the computer can proceed to load the operating system from one of the attached drives.
The term BIOS first appeared in the CP/M operating system, where it named the part of CP/M, loaded during boot, that dealt directly with the hardware (machines running CP/M typically had only a simple boot loader in ROM). Most versions of DOS include a file called "IBMBIO.COM" (IBM PC DOS) or "IO.SYS" (MS-DOS) that serves the same role as the CP/M disk BIOS.
The word BIOS can also be read as the Greek word for "life" (Βίος).

BIOS components
A BIOS contains several basic components:
(Figure: example of a CMOS Setup screen, Phoenix BIOS)
  • The BIOS Setup program, which lets the user change the computer's configuration (hard disk type, disk drives, power management, performance, and so on) as desired. The BIOS hides the details of hardware access, which would be quite complicated to handle directly.
  • Drivers for the basic hardware, such as the video adapter, input devices, the processor, and several other devices, for a basic 16-bit operating system (here, the DOS family).
  • The main bootstrap program, which lets the computer boot into the installed operating system.

The future of BIOS
The BIOS has served the PC industry for a long time, ever since the IBM PC was released on 12 August 1981. Because the BIOS still runs in the processor's slow real mode, PC designers agreed to replace it with something better: EFI (the Extensible Firmware Interface), derived from the IA-64 (Itanium) architecture.

          The Natural Side of A.I.        
IBM CEO Ginni Rometty on the biggest misconception about intelligent machines

          To OO or not to OO? What is the question?        
Marc Funaro kicked off quite a heated debate on his blog lately by raging against people pushing object-oriented programming/design and how his attempt to follow their advice nearly led to the collapse of his business. Marc was expressing a common frustration that many of us have heard from people who try to learn OO, especially from people with a long history of procedural programming and/or no computer science background. I've left comments on a few of the blog posts, but several people have asked me to go into a bit more depth about my thoughts on this issue (since I'm one of the people sometimes accused of "pushing" OO and insisting it's the "right" way to do things).

First off, I'll repeat what I've said many times: there is no One True Way. What works for me might not work for you, and even I will solve the same problem different ways at different times in different circumstances. Having said that, for me there is almost no problem today where a purely procedural solution is the right one for me. Years ago, that wasn't the case.

Prior to 1992, I hadn't worked with any OO languages, but I hadn't done just procedural programming either: I had started learning functional programming techniques around 1982. As part of my university degree, I did a one-year work experience assignment at an insurance company, mostly writing IBM 8100 assembler (and some COBOL). I helped design and implement a number of powerful library routines for a hierarchical database system the team was developing - recursive routines that I now recognize as implementing the Visitor design pattern (which I hadn't heard of at the time). What I'd learned from high-level functional programming changed how I wrote the most procedural language available - assembler. In 1992, I started to pick up C++. I picked up Smalltalk in the mid-90's. By 1997, I'd discovered Java. Over the years, I got a little better at writing OO code and started learning about design patterns and OO design. 
It was a process, a journey, still ongoing. Along the way there were a few lightbulb moments and there was plenty of frustration. Surprised? Knowing what you know of me from my blog and my posts on mailing lists and my talks at conferences, would you expect that I have been through a lot of frustration trying to become a better software developer, a better software architect? I'd be surprised at anyone who had not had such frustrations! Why do we go through this? Why do we push through to the other side to learn - and master - this stuff? IT is an interesting industry because not only does it change - like all other industries - but the pace of change is extremely fast. What we learned last year will be outdated next year, or the year after. If you look at job descriptions for ColdFusion developers, they've changed over the last few years. Most of them want experience with frameworks and CFCs now. As Matt Woodward said in comments on those blog posts, the debate is over - OO won. Years ago. No matter what you think about OO, the industry has moved on and OO is the norm. COBOL, the mainstay of procedural programming, adopted OO features in the early 90's and OO COBOL compilers were available by 1997 with an OO ANSI standard following in 2002. Ada was the first ANSI standardized OO language in 1995. C++ became an ANSI standard in 1998 but work first began on that language in 1979. Simula is considered the first object-oriented language and it appeared in 1967. Smalltalk appeared in the 70's, became a de facto standard in 1980 and an ANSI standard in 1998. All modern languages - all new languages - embrace objects from the start. Many modern languages go beyond OO and incorporate other advanced features - some of those languages would probably seem incomprehensible to traditional, procedural programmers. We shouldn't expect to understand everything but we should expect to continually learn new things so that we remain employable, relevant, interesting. 
One of the subjects that came up in the various blog conversations was Fusebox. Someone lamented that "even" Fusebox had become OO. Fusebox is a very important illustration for the CF community. Fusebox 4 was built as four large, procedural files and supported both procedural and OO styles of application (yes, a framework written in a procedural style supported OO - even Fusebox 3 supported OO application development!). Fusebox had become hard to maintain and enhance. In order to move forward, I rewrote the core files to be more maintainable - I rebuilt Fusebox as a set of collaborating objects but it was 100% backward compatible. The same procedural apps that ran on Fusebox 4 still ran on Fusebox 5. The same OO apps that ran on Fusebox 4 also still ran on Fusebox 5. Even tho' the core files changed from pure procedural (and unmaintainable) code to fully object-oriented (and, hopefully, more maintainable) code, the range of supported application styles did not change. So, how do I really feel about Marc's post? I sympathize. I know his frustration is real. His background is design and not computer science - of course he finds OO to be extremely hard. I have met only a handful of people who have a natural ability for OO thinking. Sorry, but for everyone else, this is hard shit! There are no short cuts. Some years ago, I gave a talk at CFUnited (I don't even remember which talk now) and a guy came up to me afterward and asked me about learning OO. I told him to go easy on himself and expect this to be a difficult path that would take him a long time to learn. He exploded in anger, accusing me of calling him too stupid to learn OO. For me, that perfectly sums up the unrealistic expectation many people have about learning OO. It is hard. It does take a good long while. Many people give up. Some people push through and eventually reap the benefits of the pain of learning this stuff - and of making lots of mistakes along the way. 
Don't beat yourself up if OO doesn't make sense. Don't kill yourself trying to master it. I don't "get" numbers but many people do (including my wife). I completely "get" abstract math (my wife does not). She "gets" OO but she can't program (programming languages are too fussy for her tastes). Don't kill your business attempting to achieve some ideal that doesn't work for you. Spend some personal time on it, sure, but don't lose too much sleep over it. Here are the original blog posts:
          Global BPO Business Analytics Market to Grow at a CAGR of 33.9% by 2021: Key Vendors are Accenture, Capgemini, Genpact, IBM & TCS        

Research and Markets LogoDUBLIN, August 10, 2017 /PRNewswire/ -- The "Global BPO Business Analytics Market 2017-2021" report has been added to Research and Markets' offering. The global BPO business analytics market is expected to grow at a CAGR of 33.97 % during the period 2017-2021. Global BPO...

          10 Pioneering Technologies Behind Today's Computers        
The sophisticated computers we enjoy today are not the work of a single person. These smart devices are the joint work of a number of scientists, engineers and, of course, the vendors who developed them. Here are 10 computer technologies, from past to present, that produced the sophistication of today's computers. 1. IBM Roadrunner The dream of the fastest supercomputer is indeed hard to realize, but […]
          IBM has built a graphene transmitter        
In 2010, researchers at IBM managed to build a graphene-based processor clocked at 100 GHz. For now, however, despite significant progress in graphene manufacturing technology, graphene is still a long way from replacing the silicon in our computers. … Read more
          Reid P. Meyer        
Adjunct Professor of Law
412-355-7659 (Office)
412-228-1474 (Mobile)

From PricewaterhouseCoopers LLP

Reid is an International Tax Manager with PricewaterhouseCoopers where he assists multi-national corporations in the Pittsburgh and Cleveland markets with addressing their international business needs in a tax-efficient manner. His practice focuses on cross-border structuring, U.S. international tax compliance, and value chain transformation. Reid began practice upon his graduation from the University of Pittsburgh School of Law in 2005.  While at Pitt Law, Reid was the Production Editor of the Pittsburgh Tax Review and remains involved with the organization. Prior to attending Pitt Law, Reid worked in the Accounting group of IBM in Armonk, NY. Reid is a 1998 graduate of the Pennsylvania State University where he majored in finance with an emphasis on accounting.



          IBM thinks it's ready to turn quantum computing into an actual business         

IBM is still in the throes of a major transition from physical hardware manufacturing to an almost total focus on knowledge-based services. Artificial intelligence (Watson) and the tools to leverage that technology (massive and fast processing power) represent the key areas of focus in what is a new era for the company.

          VARIOUS JOB OPENINGS        
This is volunteer work. The content of the messages is the sole responsibility of the advertisers. There are many opportunities, and we hope
you find yours!


Sent by: "Suzana Negrini" negrinipiresdecampos
Thu, 25 Nov 2010 12:56 pm

An outsourcing firm for accounting, tax, and finance work, tax consulting,
and auditing. Founded in 2001, it has more than 50 clients, 40
employees, and annual revenue of roughly R$ 3 million, with revenue
projected to grow 50% per year.



Will handle the entire accounting routine: classification, entry of all
ledger accounts, reconciliation, balance sheet, income statement (DRE), financial statements in

User of an integrated system, with good Excel skills

Tax computations (ICMS, IPI, PIS, COFINS, IRPJ, CSSL,

Technical or university degree in

HOURS: Monday to Thursday from 8 a.m. to 6 p.m., Friday from 8 a.m. to 5 p.m.

SALARY: R$ 2,000.00 – CLT regime




Suzana Negrini
Managing Partner, Scelta RH
11 2772.3542

Urgent Opening * Telecom Consultant
Sent by: "Milena Possidonio" milena.possidonio
Thu, 25 Nov 2010 12:57 pm

Good morning, João Honório, how are you?

One more opening.

Telecom Consultant

A multinational company in the telecommunications segment, looking for a

Experience managing the department's budget (Capex and Opex), resource
allocation, goal tracking, monitoring compliance with contracts, controlling
the fleet and human resources, and overseeing contracted companies.

Desired qualifications:
A broad view of the market, results orientation, good interpersonal
skills, and negotiating ability.
Advanced knowledge of Excel and PowerPoint.

This professional will work in the administrative area, providing all the
support needed by the management of a given division, evaluating and controlling
its budgets, contracts, resources, and more.

A completed degree in business administration, economics, engineering, or
administrative fields in general.
At least one year of experience with contract management, budgets, and controls

Salary negotiable, plus all the benefits of a multinational.

Please send your résumé, with salary expectations and the name of the position, to:


Milena Possidonio
Tel.: 8259-7252
Office: 3093-4767

Urgent Opening * Civil Engineer
Sent by: "Milena Possidonio" milena.possidonio
Thu, 25 Nov 2010 12:57 pm

Good morning, João Honório, how are you?

One more opening.

Civil Engineer III

A multinational company in the telecommunications segment, looking for a

A degree in Civil Engineering, with at least 3 years of experience on construction
Will analyze the loads on metal-structure towers,
plan construction stages, and analyze structural assessments (reports, designs,

Skill in effectively managing contracts and construction work is required
At least one year of experience with structural calculation is indispensable.

Available to travel within the state of SP.

This professional will hire third parties, manage contracts, lead
projects, and attest to technical responsibility.

Experience in the Telecom sector is not required.

Salary negotiable, plus all the benefits of a multinational.

Please send your résumé, with salary expectations and the name of the position, to:


Milena Possidonio
Tel.: 8259-7252
Office: 3093-4767

Milena Possidonio

General Services Assistant openings - SP.
Sent by: "Rh Valoragregado" rhvaloragregado
Thu, 25 Nov 2010 12:58 pm

Openings: General Services Assistant.
Number of openings: 10
Salary: R$ 621.00
Benefits: transport vouchers according to route (preference for candidates living near Barueri, Carapicuíba, or Osasco), meal vouchers of R$ 7.00 per day (Sodexo), Intermédica health insurance.
Schedule: Monday to Saturday
Hours: 6:00 am to 2:20 pm; 2:10 pm to 10:30 pm; 10:20 pm to 6:10 am; 8:00 am to 5:00 pm; 11:20 pm to 7:10 am. Availability for overtime on Sundays: staff work every other Sunday, with Sundays paid at 100%.
Workplace: Tamboré
Experience: in sorting, postal work, etc.
Age: over 18
Education: high school completed or in progress
Skills: able to copy texts without spelling mistakes; knowledge of geography and of the location of Brazil's states and regions; good handwriting
Description of duties: sort letters by destination (state, municipality, region, etc.)
Prepare loads (packing letters into the proper boxes in ZIP-code order)
Send your résumé to:

Code: General Services Assistant.

Openings - Accounting Instructor/Teacher
Sent by: "Edmar Reis"
Thu, 25 Nov 2010 12:58 pm

Hello, João.

Could you kindly announce this opening as well?




A training institution, in business for 10 years, is recruiting:
2 instructors for Personnel Department classes and
2 instructors for Accounting Techniques.

The Personnel Department (DP) course covers hiring, payroll, and dismissal, with
examples covering 80 to 90% of the most common situations in the field.

In Accounting, we cover commercial accounting, starting with
basic concepts, the most common commercial documents and ledger
entries, balance sheets, income statements (DREs), up to FIFO (PEPS) and LIFO (UEPS).

Classes are at the *technical/vocational level.*
Complete teaching materials are provided for both student and teacher.
*Candidate profile:*
- Experience in the Personnel Department and Accounting fields;
- Available to teach on *Saturday* mornings and/or afternoons (the available
classes are on Saturdays);
- Locations: in the city of São Paulo, in the Móoca, Vila Guilherme, and Casa
- Teaching experience is not required, just good knowledge of the
- Candidates with both skill sets may teach both courses;
- We require commitment and responsibility

- Excellent pay, especially for those with free Saturdays who want to
supplement their income;
- An excellent working environment
- An excellent opportunity, since classes run back to back: when one
group finishes, another begins.

Send your résumé by email with
the subject "INSTRUTOR" by 3 Dec 2010.

Sent by: "João Honório" joaohonoriorh2004
Thu, 25 Nov 2010 12:58 pm

Dear João Honório,

Please announce this opening.

Thank you very much.

Opening in São Paulo.

Back-office internship - Degree in progress in Business Administration, HR Management, or related
fields. - From the first to the penultimate year. - Intermediate/advanced
Excel - Knowledge of spreadsheets, reports,
and summaries. - R$ 600.00 + transport vouchers + meal vouchers

Interested candidates, send a CV to

Good morning, João Honório,

I am with Fit RH Consulting, a consultancy specializing in
hunting. I would like to announce some openings I am working on. If you
have any referrals, please pass my contact details on to the candidates.

A large industrial company, located in the central region of SP,

Supply/Projects Analyst, mid-level (PL).

A professional with experience developing projects in the
purchasing area;

Developing new suppliers;

Opening and tracking purchasing processes; cost analysis.

SAP experience is essential.

A completed university degree.

Logistics Analysts to fill mid-level and
Must have experience in:
- Proposing and analyzing projects using the PMI methodology;
- Designing a new structure for internal movement,
warehousing, and handling of INBOUND and OUTBOUND products;
- Experience with the International Logistics areas;
- ESSENTIAL: advanced/fluent English
- DESIRABLE: intermediate/advanced Spanish
Location: São Paulo.
Interested candidates, send an updated CV to:

Thank you in advance; I remain available for further


Opening: Doorman for the Tamboré/SP region
Sent by: "Rh Valoragregado" rhvaloragregado
Thu, 25 Nov 2010 12:58 pm

Number of openings: 4
Salary: R$ 718.00
Benefits: transport vouchers according to route (preference for candidates living near Barueri, Carapicuíba, or Osasco), meal vouchers of R$ 7.00 per day (Sodexo), Intermédica health insurance.
Hours: all on a 5x1 schedule – 1 opening 6:00 am to 2:00 pm, 2 openings 2:00 pm to 10:00 pm, 1 opening 10:00 pm to 6:00 am. Weekend availability required: on the 5x1 schedule, staff work every other Saturday and every other Sunday; on a working Saturday the Sunday is off, and vice versa.
Age: over 18
Education: high school, complete or incomplete.
Skills: basic computer literacy; communicative, proactive, experience in the field
Description of duties: control access to the company / issue incident reports / answer external and internal telephone calls / register visitors and new employees / observe and guide employees on behavioral and disciplinary rules.
Code: Doorman.

openings at Bombril
Sent by: "João Honório" joaohonoriorh2004
Thu, 25 Nov 2010 12:58 pm

Hello, everyone!

Here are some open positions at Bombril:

1- Salespeople and Sales Supervisors (several openings across Brazil
2- Tax Assistant to work in SBC
3- Tax Analyst to work in SBC

You can send CVs to my e-mail:

Posted by: "João Honório" joaohonoriorh2004
Thu, Nov 25, 2010, 12:58 pm

Hello Mr. Honório,

Please consider only this e-mail, as it includes an additional piece of information.


Daiane Andrades

Consultant - Telerecursos

RH Internacional - A Randstad Company

Rua 24 de maio, 276, 4º andar

01041-000 - São Paulo/SP

Tel: (011) 3372-8500

From: Daiane Andrades
Sent: Wednesday, November 24, 2010, 12:34 pm
To: ''
Subject: Opening: Planning Supervisor - Fluent English

Hello Mr. Honório,

Here is the ad for an open position.


Multinational telecommunications company.

Opening: Planning Supervisor – Churn

Required: fluent English

Education: completed bachelor's degree

Duties: use of dedicated systems to gather information;
analysis of voluntary and involuntary line cancellations;
preparation of reports, proposals, and detailed cost presentations;

Supervision of a five-person team.

Reports to the board of directors and to the US.

Location: São Paulo/SP

Average salary: R$ 5,000.00 + full benefits and a meal voucher of R$ 20.00

Interested candidates should send their résumé to
, putting the position name in the subject line: Supervisor de Planejamento

Daiane Andrades

Consultant - Telerecursos
RH Internacional - A Randstad Company
João, could you post this for me?
Thanks in advance

IT company active in the market for over 23 years.

Seeking professionals

All positions are CLT (permanent) with the following benefits: VR (meal voucher), AM (health insurance), AO (dental insurance), SV (life insurance), VT (transit pass),
on-site parking, and PLR (profit sharing)

To work in the Aclimação area, near the park.

Interested candidates should send a résumé with salary expectations and the position title

Network Analyst

MCP certification in Windows 2003 Server or later;
Advanced knowledge of the ISA 2004/2006, Exchange 2003/2007,
and Windows Server 2000/2003/2008 platforms;

Good knowledge of DNS, DHCP, SMTP, and network services;
Server maintenance and monitoring;
Configuration and installation of servers and access tools
Implementation of network administration and backup routines
ISA Server and Exchange Server certifications are desirable

Minimum of 1 year of experience in the role.

Technical English.

Visual FoxPro or VB.NET Programmer

Good knowledge of the required programming language.

Knowledge of SQL databases

Experience with ERP systems (accounting, tax, payables and
receivables, etc.)

Knowledge of SQL statements (SELECT, INSERT, UPDATE).

Andressa Antoni

Address: Rua Pais de Andrade, 485

Posted by: "João Honório" joaohonoriorh2004
Thu, Nov 25, 2010, 12:59 pm

Profile: Demand and Planning Intern (large multinational


· Demand and Planning Intern

· Area: SCM

· Reports to the Demand and Planning Supervisor

Main activities:

· Work in the planning and supply chain area.

· Interact with related areas such as Sales, Production, and
Logistics, providing and receiving information about the
planning process.

· Support long-term demand, materials, and production
planning.

· Work with large volumes of data and turn them into
information useful for planning demand, materials, and

· Produce follow-up reports on the processes of the

· Requisition materials and raw materials in the ERP system.

· Create and maintain master data in the ERP system.


· Studying Production Engineering (preferred) or
Business Administration.

· Strong computer skills are essential, especially
Excel; knowledge of ERP tools is a plus.

· Aptitude for inventory management and flow of

· Knowledge of statistics.

· English is desirable.

· Proactivity.

· Good logical and analytical reasoning.

· Organization and planning.

· Excellent oral and written communication skills.

· Good ability to work under pressure.

Compensation / benefits:

· Stipend: R$ 900.00 per month

· Benefits: - health insurance

- assistance

- group life insurance

- meals (on-site restaurant)

Interested candidates should send a résumé to

Good afternoon João, could you please post the opening below?
Junior Accounting Analyst

Will analyze balance-sheet and income-statement accounts, handle
tax closings, and assist the company's other departments. Will post
daily invoice entries for inventory supplies, check the
technical-assistance payments daily, and prepare magnetic files for
the accounting offices. Will provide tax support to other
departments: payables, receivables, logistics, sales, and
others. Will calculate and forecast municipal, state, and federal
taxes, among other duties pertinent to the role.

Experience in balance-sheet account analysis is desirable.

Bachelor's degree in Accounting;

Salary: R$ 1,800.00 to 2,200.00 + (VT, VR, health and dental insurance, partnerships with
colleges, restaurants, and pharmacies)

Workplace: near the Santa Cruz metro station

Send résumés to:


Good afternoon!

Below is an open position...


Hiring company details



Industry:

Human Resources

Job ad details


Outbound Call Center Agent


20 openings


· Will sell health plans.

· Call center experience is desirable.

The company is located in SACOMÃ

· Completed high school.

· Basic computer skills.


· Salary: R$ 630.00 + commission

· Benefits: transit pass, meal voucher

· Contract type: CLT (permanent)

· Hours: we have openings for the morning and afternoon shifts.

· Additional information: we need candidates with full
schedule availability, good interpersonal skills, good verbal
expression, a desire to grow, and commitment.

Human Resources Consultant

Good afternoon. As agreed, here are openings for posting.

Access Controller – North region (10 openings), South region (10 openings), West region (10 openings). Benefits: salary of R$ 635.80 + overtime, transit pass, staple-food basket, meal voucher, life insurance, and dental insurance. Schedule: 5x1, from 8:00 am to 8:00 pm.

Doorman – Central region (1 opening), ABC (6 openings), and Osasco (5 openings). Benefits: salary of R$ 635.80 + overtime, transit pass, staple-food basket, meal voucher, life insurance, and dental insurance. Schedule: 12x36, from 7:00 am to 7:00 pm or from 7:00 pm to 7:00 am.

Cleaning Assistant – Central region (1 opening), Guarulhos (1 opening). Benefits: salary of R$ 520.00, transit pass, staple-food basket, meal voucher, life insurance, and dental insurance. Schedule: 5x2, from 8:00 am to 5:00 pm.

Building Maintenance Assistant – Central region (2 openings), Guarulhos (1 opening). Benefits: salary of R$ 700.00, transit pass, staple-food basket, meal voucher, life insurance, and dental insurance. Schedule: 6x1, from 8:00 am to 5:00 pm.

Doorman – Barueri (4 openings), Guarulhos (14 openings), and São Caetano do Sul (10 openings). Benefits: salary of R$ 635.80 + overtime, transit pass, staple-food basket, meal voucher, life insurance, and dental insurance. Schedule: 12x36, from 7:00 am to 7:00 pm or from 7:00 pm to 7:00 am.

Best regards, Marcio Ramos Maciel. Recruitment &

Posted by: "Vanessa Olivo rossi" van_olivo
Fri, Nov 26, 2010, 9:45 am

Good afternoon João,

Please post these opportunities to the group:

Multinational company located in the South Zone of São Paulo; solid experience in the field and leadership are essential.
CLT, compensation of R$ 5,000.00 + bonus and variable pay + benefits.

Multinational company located in the South Zone of São Paulo; solid experience in the field and leadership are essential.
CLT, compensation of R$ 7,500.00 + bonus and variable pay + benefits.

Candidates who fit the profile should send an e-mail to

Thank you for helping once again!

IT Openings - São Paulo
Posted by: "Vivian Marques" vivian_recrutamento
Fri, Nov 26, 2010, 9:45 am

Verus Brasil, a company specializing in Information Technology services, is recruiting professionals with the following profiles:

Senior .NET Systems Analyst
Experience in systems analysis and programming (50% / 50%)
Support for the developers.
Analysis and development of the architecture team's projects.
Microsoft Visual Studio 2008/2010
Microsoft SQL Server 2005/2008
C#/VB.NET programming - Microsoft .NET Framework 2.0/3.0/3.5/4.0
Enterprise Architect
Rational ClearCase
Microsoft Team Foundation Server
Location: SP
Base: SP
Term: open-ended
Start: immediate/negotiable
Code: Analista .Net/01_SP

Senior Systems Analyst
Experience in systems analysis and programming (80% / 20%)
Requirements gathering, systems analysis, and project leadership.
Focus on integrating electronic payment slips (boletos) with the contract (minutas) system.
Tracking requests with the analysis and development factory.
Knowledge of banking products in general and front-office systems is desirable
VB.NET
SQL Server
Location: SP
Base: SP
Term: 6 months
Start: immediate/negotiable
Code: Analista Sist./06_SP

Junior Configuration Analyst
Release builds and quality assurance. Hands-on work in ClearQuest, ClearCase, and VSTS, focused on RTM (minimum test script) testing.
Desirable: ClearCase and ClearQuest
Basic knowledge of .NET
Location: SP
Base: SP
Term: 1 year
Start: immediate/negotiable
Code: Analista Config./01_SP

Mid-level Systems Support Analyst
Prior experience with financial institutions and/or client relationship management is desirable.
Act as the focal point for logging and following up on system or infrastructure requests and/or incidents affecting a specific group of users (trading desks and Treasury), improving IT's relationship with this audience.
Relay the information received to the HD Sistemas / CAD / PAC teams accurately and quickly.
Track user satisfaction and immediately engage those responsible in a crisis.
Office suite - intermediate - mandatory
SQL - basic - mandatory
ITIL - basic - desirable
Location: SP
Base: SP
Term: open-ended
Start: immediate/negotiable
Code: Suporte Sist./01_SP

Senior Specialist Systems Analyst
Completed degree in Systems Analysis
Define the solution architecture for the new systems and modules to be built in 2011
Knowledge of banking products is desirable
Visual Studio
PowerDesigner
ClearQuest
ClearCase
RUP and service orientation (SOA)
Location: SP
Base: SP
Term: open-ended
Start: immediate/negotiable
Code: Arq. Sist./01_SP

Senior Specialist Systems Analyst
Project lead for collateral (Garantias) projects.
Tracking the projects with the software factories.
Completed degree in Systems Analysis.
Deep knowledge of banking products: loans, foreign exchange, derivatives, and collateral.
Visual Studio
ClearCase
ClearQuest
PowerDesigner
Location: SP
Base: SP
Term: open-ended
Start: immediate/negotiable
Code: AS/Especialista/01_SP

Senior Requirements Analyst
Completed degree in Systems Analysis.
Requirements gathering, preparation of the vision document, screen prototypes
Knowledge of banking products (loans, foreign exchange, and derivatives)
ClearCase
ClearQuest
Location: SP
Base: SP
Term: open-ended
Start: immediate/negotiable
Code: Analista Req./01_SP

Senior Systems Analyst (Project Management)
Senior systems analyst with project management experience
Experience in systems analysis and programming (70% / 30%)
Solid experience in requirements gathering;
Fluency in producing systems documentation;
Technical experience drafting solution proposals and validating technical specifications;
Good SQL skills for generating test data sets.
Mapping the architecture and resources required
Preparing the vision document
Providing all the information the PMO/steering committee needs to track the project
Validating the artifacts produced by the project partner's team
Tracking the technical specification and development phases
Managing deadlines and tracking the partner company's deliveries (rework of the contract history and versioning, migration of contracts to DOCX, creation of a dedicated environment/server for the system)
Tracking and sign-off of the project (IT)
Tracking the production rollout and the post-rollout period
Microsoft Visual Studio 2008
SQL Server 2008
Microsoft PowerPoint
MDS desirable
Location: SP
Base: SP
Term: 4 months (renewable)
Start: immediate/negotiable
Code: ANS/GP/01_SP

Senior Systems Analyst
Experience in systems analysis and programming (70% / 30%)
Derivatives (swaps, options, NDFs, futures)
Knowledge of accounting and financial mathematics; fluent English
Requirements gathering; system testing and sign-off; technical specification (physical design)
Excel (calculations)
Mastery of SQL Server (Transact-SQL; creating functions, stored procedures, and views)
Mastery of UML modeling
PowerDesigner
Location: SP
Base: SP
Term: open-ended
Start: immediate/negotiable
Code: Analista Sist./07_SP

Senior Systems Analyst
Experience in systems analysis and programming (60% / 40%)
Integration / workflow
High degree of commitment and responsibility for assigned activities/tasks
Treasury experience (front desk, back office, MIS) is desirable
Analysis and design of the integration architecture
Building and customizing business process (BPM) architecture components
Planning and running tests (unit, integration, and performance)
Hands-on technical profile on the IBM WebSphere platform (WebSphere Message Broker, WebSphere Process Server, WebSphere Business Monitor, WebSphere Registry & Repository) and its development tools (WebSphere Integration Developer, WebSphere Message Broker Toolkit, WebSphere Monitor Model)
Knowledge of SOA / web services and BPM technical architecture; development standards and policies (design patterns, versioning, naming, canonical model).
Knowledge of Java, XML/Schema, and messaging.
Message Broker and/or Process Server certification preferred
Knowledge of BPEL and BPMN.
Modeling and documenting business processes with WebSphere Business Modeler
Location: SP
Base: SP
Term: open-ended
Start: immediate/negotiable
Code: AS/PET/01_SP

Senior Systems Analyst
Experience in systems analysis and programming (60% / 40%)
Integration / workflow
High degree of commitment and responsibility for assigned activities/tasks
Treasury experience (front desk, back office, MIS) is desirable
Analysis and design of the integration architecture
- Building and customizing the integration architecture components
- Planning and running tests (unit, integration, and performance)
Hands-on technical profile on the IBM WebSphere platform (WebSphere Message Broker, WebSphere Process Server, WebSphere Business Monitor, WebSphere Registry & Repository) and its development tools (WebSphere Integration Developer, WebSphere Message Broker Toolkit, WebSphere Monitor Model)
Knowledge of SOA / web services and BPM technical architecture; development standards and policies (design patterns, versioning, naming, canonical model).
Knowledge of Java, XML/Schema, and messaging.
Message Broker and/or Process Server certification preferred
Knowledge of BPEL and BPMN.
Modeling and documenting business processes with WebSphere Business Modeler
Location: SP
Base: SP
Term: open-ended
Start: immediate/negotiable
Code: AS/PET/02_SP

Senior BI Analyst
Align demands with the user area, size the activities under your responsibility, define the best solution for each demand, and deliver it
Microsoft SQL Server 2008 Analysis Services
Microsoft SQL Server 2008 Integration Services
Microsoft SQL Server 2008 Reporting Services
Knowledge of DW, data marts, cubes, and multidimensional modeling
Location: SP
Base: SP
Term: open-ended
Start: immediate/negotiable
Code: AS/BI/01_SP

NOTE: Interested candidates, please send your résumé (in Word format) with salary expectations, availability, and the code of the position of interest to the e-mail:

Thank you very much,
Referrals welcome


Vivian Marques
Human Resources
Tel: 11 4208-4991 / 11 3515-7400
Consultoria Verus Brasil

Job postings
Posted by: "daniela prota"
Fri, Nov 26, 2010, 9:48 am

Good afternoon João!

I work at a consultancy that recruits and selects
exclusively people with disabilities.

I would appreciate your help posting the following openings:

Sales Assistant
- Completed high school
- Customer service experience

Hours: 9:00 am to 6:30 pm (Mon-Fri), 9:00 am to 1:00 pm (Sat)
Salary: R$ 1,000.00 to R$ 1,100.00 + benefits
Workplace: São Paulo (República)

Office Assistant
- Administrative experience
- Ages 18 to 45
Salary: R$ 1,000.00 + benefits
Workplace: Santo André

Production Operator
- No experience required
Salary: R$ 800.00 + benefits
Workplace: Santo André
Hours: to be arranged

Operations Assistant (in-store)
- No experience required
Salary: R$ 640.80 + VT, VR, health insurance
Shifts: 10:00 am to 4:00 pm, 2:00 pm to 8:00 pm, or 4:00 pm to 10:00 pm
6x1 rotation
Workplace: various shopping malls in São Paulo: Aricanduva, Anália Franco,
Morumbi, Ibirapuera, Plaza Sul, Vila Olímpia, Paulista, Higienópolis, West Plaza,
Eldorado, Lar Center, and Santana.

Interested candidates should send a résumé to: or - with the desired position in the subject line

or call: 3266-3021 / 3266-8158


Urgent openings
Posted by: "Marcia"
Fri, Nov 26, 2010, 9:48 am

Mr. João Honorio, I need urgent help.

Due to several new store openings, Mister Sheik is hiring

Kitchen helper / bakery / counter attendant / cashier

Please direct candidates to Rua do Bosque 929 – Barra Funda, to the
e-mail, or to the phone: 2197-1300

Résumés from 7 men with kitchen-helper experience
– ages 25 to 35

Salary: R$ 712.00 + on-site meals + VT

Hours: 1:00 pm to 10:00 pm (Monday to Friday)

Experience is preferred, but basic kitchen knowledge is enough

Living near the workplace is important.

I also need 3 men with the same profile to work at the Center Norte
shopping mall

And three young women in the same age range as counter attendants/cashiers

Knowing how to operate an espresso machine is important

Must live near the workplace

Salary: R$ 560.00

I also need men in the same age range to work in the kitchen in Diadema
(Shopping Praça da Moça) and Largo 13 (Shopping Mais Largo 13)

R$ 560.00

Marcia Regina da Silva


Human Resources

Rua do Bosque, 929

Barra Funda - São Paulo - SP - 01136-000

Tel/Fax: (11) 2197-1300






Posted by: "Karina Lourenço" kalourenco.selecao
Fri, Nov 26, 2010, 9:48 am

Good afternoon everyone,

ECLIPSE IT, a consultancy specializing in Information Technology, is urgently seeking professionals with the profile below, and we would like to share this demand with you.

If it does not match your own profile, feel free to pass these opportunities along to your contacts. Every referral is very welcome!

Degree in Human Resources Management or Business Administration desirable (in progress or completed).
Experience with personnel department routines and labor legislation is required.
Knowledge of ERP personnel modules, and of configuring payroll events to support the Accounting and Finance areas, is a plus.
Workplace: São Bernardo do Campo / Contract: CLT + benefits
Immediate start
- Carry out personnel administration activities, being responsible for document custody, records, personnel movements, and payroll calculation and payment, among others, ensuring compliance with all legal requirements in the area, including during labor inspections;
- Hiring, termination, payroll generation, vacations, 13th salary, electronic timekeeping, maintenance of employee records, and more;
- Validate the payroll closing and calculate charges, salary payments, and vacation credits through the payroll system, in order to comply with labor and social security legislation and safeguard the rights of all employees and of the company;
- Analyze the payroll calculations and reports, informing the Finance area of the amount to provision for deposit into employees' checking accounts;
- Answer and clarify employees' labor-related questions, explaining in detail the origin of payments and deductions on the payslip, so as to avoid embarrassment and dissatisfaction;
- Follow labor and social security inspections, providing the necessary documentation and supporting the legal department in case of citations;
- Represent the company before trade unions, labor courts, other official bodies, and clients whenever specific clarifications and guidance are needed;
- Oversee employees' periodic medical examinations through the Occupational Health Medical Control Program (PCMSO), to meet Ministry of Labor requirements;
- Prepare and file the legally required reports, control the submission of the documents required by official bodies (DIRF, RAIS, CAGED), calculate taxes and social contributions, and control and reconcile employees' time records;

Posted by: "Karina Lourenço" kalourenco.selecao
Fri, Nov 26, 2010, 9:49 am

Workplace: São Bernardo do Campo
Contract: CLT + benefits
Immediate start

Interested candidates should send résumés, referencing the position and salary expectations, to


Karina de Jesus Lourenço
Human Resources
Eclipse IT Consultoria em Informática
Tel: +55 11 4083-7888

Opening: Doormen - ABC
Posted by: ""
Fri, Nov 26, 2010, 9:49 am

Good afternoon João! Please post the opening below.

Doormen

20 openings
With or without experience; we provide training.
Male or female
Ages 25 to 55
Schedule availability required
Must live in Santo André, São Caetano do Sul, São Bernardo do Campo, Diadema, or Mauá, as our posts are in the ABC region.

Salary + transit pass, meal voucher, staple-food basket, health insurance, dental insurance, and life insurance.


Job posting - Internal HR Consultant
Posted by: "Renata Voltolini" renatavoltolini
Fri, Nov 26, 2010, 9:49 am

João, good afternoon.

I would like to post an opening to the group:

Ultragaz, a large company, leader in its segment and present throughout Brazil, is seeking professionals with extensive experience and availability for frequent travel:

Internal Human Resources Consultant

Main objective

Plan human resources initiatives and equip managers for people-management processes, for a specific executive area, in order to meet strategic objectives in line with current policies and with HR best practices and trends.
Main responsibilities and requirements

Advise managers on the management of their teams by monitoring client areas' initiatives, critically analyzing demands, providing inputs and tools, taking part directly in initiatives, and supporting decision-making;
Contribute to managers' development through individual coaching and results assessment;
Take part in specific HR projects by planning and coordinating initiatives, facilitating communication, and evaluating results;
Help maintain the HR models and policies in force by monitoring initiatives with clients, flagging deviations, and proposing solutions;
Help align HR systems with the company's needs and with market trends;
Ensure partner management in line with company guidelines, through supplier prospecting, contract negotiation, and performance monitoring;
Contribute to clients' results and budget meetings by analyzing personnel costs and presenting proposals to boost team performance;
Produce the relevant studies on time and to the required standards of excellence;
Help ensure compliance with the company's policies, norms, procedures, and ethical standards within their area of work.
Autonomy;
Strategic vision;
English is a plus.
Interested candidates, send your résumé via the link:


Opening: Checker (Conferente) - SP
Posted by: "anapaulaanna" anapaulaanna
Fri, Nov 26, 2010, 9:50 am

Good afternoon!

We are a large company looking for professionals with the following profile:

- Checker.

- Experience with picking and checking goods
- Cold-room experience is desirable (the role involves working in a cold room).
- Completed high school.
- Salary of R$ 930.00 + VT, VR of R$ 6.00, health insurance, dental insurance, and group life insurance.
- Hours: Monday to Friday from 1:00 pm to 10:00 pm, and Saturday from 11:00 am to 3:00 pm.

- Interested candidates should send a résumé to: - with the subject "Conferente".

Posted by: "Alessandra Alves" alealvesduarte
Fri, Nov 26, 2010, 9:50 am

I have 1 opening for an Administrative Coordinator

Workplace: Rua Groenlândia - Jardim Europa

Contract type: PJ (independent contractor)



Description of duties:
- Coordinate the Administrative, Document Collection, Archive,
Document Audit, and Treasury teams;
- Present the area's indicators and reports to the board;
- Propose improvements to work processes;
- Be familiar with operating systems;
- Motivate the team;
- Deliver significant results for the company.

Challenges of the role:
- Restructure the area together with the administrative board, delivering
short-term results for the company.

- Study the company's administrative processes.
- Present the area's performance indicators.

Interested candidates should send a résumé with the subject COORDENADOR(A) ADMINISTRATIVO, to the
attention of Alessandra, to:

Alessandra Alves

Posted by: "Tony's Video Locadora" drena1106
Fri, Nov 26, 2010, 9:50 am





Posted by: "Solucao cinco Promocoes e eventos" selecao.solucao5
Fri, Nov 26, 2010, 9:55 am

Good afternoon.

Here is an opening for posting.


We have 12 openings in total, spread across every region of São Paulo.

The role involves approaching customers, cleaning and organizing the point of sale, winning new display space in the store, and producing reports.

The company offers a starting salary of R$ 650, bonuses of up to R$ 300, a meal voucher, and a transit pass.

Some experience in promotions or sales is required.

Interested candidates should register on the site or send a résumé with the subject PROMOTOR (A) DE HOME CENTER.


Mirella Sena

Internal Air Conditioning Salesperson - URGENT
Posted by: "Nidiane Rodrigues"
Fri, Nov 26, 2010, 9:55 am

Good afternoon João!

Could you please post the opening below?

Good afternoon, group!

Soma Frio Ar Condicionado is hiring:


Duties: sales of equipment and services (installation, corrective and preventive maintenance); coordination of the operations team.

Required: completed high school

Experience selling the product (air conditioning) and basic administrative skills.

Workplace: Barra Funda.

Hours: Monday to Friday, 9:00 am to 7:00 pm

Benefits: VT and VR

Salary: base + commission.

Note: must be available for an immediate start

Send your résumé URGENTLY to:

Thank you very much.

Seeking a Recruitment and Selection Consultant
Posted by: "Roseana Romualdo" roseana.romualdo
Fri, Nov 26, 2010, 9:57 am

Hello friends!

We are looking for a professional with recruitment and selection experience
who can work with us as a PARTNER, both on selection processes in general and
on a specific product of ours, Recruitment and Selection
Outsourcing.

Our firm focuses exclusively on selection processes for the
Technology field, covering IT, Telecommunications, and Business.
It is therefore a very specific audience, but we are flexible
even about people without experience in this area.
What matters is that the professional can work with us in partnership.

If you are interested, please send us your details
(résumé) and your availability
          Apple-IBM deal threatens Android's enterprise push        
The new Apple-IBM partnership seems sure to help Apple sell more iPads to businesses, but it may also be setting off alarm bells at mobile device management companies large and small.
          Lexmark by IBM 13T0101 and 10E0043 toner - Current price: HUF 10,000        
For sale separately: original factory Lexmark by IBM 13T0101 and/or 10E0043 toner cartridges. Original, factory-sealed products with a 1-month replacement warranty. The price is per unit!
13T0101: high-capacity toner cartridge for the Lexmark Optra E310 / E312 / E312l (E-310 / E-312 / E-312-l) (0013T0101 / 6,000 pages; also known as 13T0301 - 3,000 pages)
10E0043: black toner cartridge for the Lexmark Optra C710 / C710n / C710dn (C-710 / C-710-n / C-710-dn) (0010E0043 / 10,000 pages) AT A DISCOUNT PRICE!!
Lexmark by IBM 13T0101 and 10E0043 toner
Current price: HUF 10,000
Auction ends: 2017-08-31 22:24
          D P BuZZ - September 29, 2011        

Join Larry Jordan and Mike Horton as they go live to the trade show and talk with:

Karen Everett, Founder, NewDoc Editing

Karen Everett is a documentary story consultant. She helps documentary producers figure out how to craft their stories to catch the eyes of distributors and viewers. She has discovered there are two “templates” that are very successful today and, after she gave her presentation at the Digital Video Conference, we sat down with her to learn the highlights of her approach.

Bruce Masters, Senior Program Manager, IBM

Increasingly, producers are turning to LTO tape for archiving projects. In fact, some networks are now accepting LTO for final program delivery. Bruce Masters, Senior Program Manager for IBM, discusses Ultrium – the consortium of IBM, HP, and Quantum that is the driving force behind LTO. What is it? How can it help us? And where is it going in the future?

Mark Pastor, Strategic Business Manager, Quantum

Whitney Lippincott, Business Alliance Manager, HP Storage

Bruce Masters set the scene about LTO tape. Mark Pastor and Whitney Lippincott provide specific examples of how it is being used in the real world. We also spend time talking about specific techniques you can use to be successful using LTO tape to archive and access your projects.

Sam Thomas, Business and Development Manager, Petrol Bags, Division of Vitec Group

Petrol Bags is relatively new to the industry. If you need to carry cameras or other gear from one place to another, these are products you need to check out. Sam Thomas joins us to talk about their newest bag – the Cambio – which provides a built-in tripod and internal lighting system!

Ned Soltz, Contributing Editor, DV and Videography magazines

What can we say? It’s Ned Soltz. Ned looks at the new camera technology on display at DV Expo with an eye on showcasing new cameras and camera accessories and his surprising discovery of removable lenses for the iPhone 4!

Doug Pircher, General Manager, International Supplies

Every year at Digital Video Expo, Doug Pircher introduces us to products we’ve never heard of that become part of our regular production tools. This year is no exception – Doug has scoured the earth to find a great collection of intriguing products that can simplify your life — or at least make it more fun.

Larry Laboe, Executive Director, NewFilm Makers – Los Angeles

It is easy, at a trade show, to get totally captivated by technology. But the purpose of filmmaking is to make films that other people want to watch. That’s where Larry Laboe comes in – he heads NewFilm Makers – Los Angeles, an organization that is part user group and part film festival. And a GREAT place to see films. Join us as Larry explains.

          NAB Show 2011 - Day 3 - April 13, 2011        

Here's the third in our series of Special Reports.

Join host Larry Jordan, live from our booth on the show floor, as he talks with:

Bruce Masters, Senior Program Manager, IBM & Laura Loredo, Product Manager, HP.

Nick Rashby, President, AJA.

Philip Storey, CEO, Xendata.

Vincent Maza, Worldwide Marketing Manager, Avid.

Jim Tierney, CEO, Digital Anarchy.

Gary Arlen, President, Arlen Communications.

Donn Gurule, President, Lightbeam Systems.

Michael Kammes, Post-production Consultant, Key Code Media.

          Cognitive capabilities will make the difference in the digital transformation of the retail and distribution sector        

According to IDC, 97% of the distribution sector in Western Europe is carrying out, or is about to begin, digital transformation projects, an "unstoppable process" in which, IBM has stressed, cognitive capabilities will make the difference.

          The Role Computer Support Plays in Our Lives        

Due to the many different stages in computer development, it's hard to pinpoint who exactly pioneered the idea for modern day computers, but it's plain to see that they have come a long way from the first IBM home computers and the MS-DOS Microsoft operating system of the 1980s. Just think about the first desktop computers that were available and now look at the laptops many of us carry around in our bags.

These advances in software and machinery have meant that computers have become more affordable and more widely available, which is a wonderful thing. However, it means that the machinery will be in the hands of less experienced or less technically minded people and therefore, some assistance is needed. This is where computer support comes in. The developers of hardware and software realised that as well as an instruction manual, some technical support would be needed on a more personal level.

Computer support started with experts helping to set up computers by installing software and maintaining and repairing the machinery, but with the invention of the internet and the many things that go with it, the role of computer support technicians has grown.

Today, they can help with anything from setting up a wireless internet network for your home or office, repairing laptops or desktop computers, installing and maintaining software and protecting your computer from viruses. In fact, any aspect of computer ownership can be aided with the help of a computer support technician in this day and age.

Many companies have now been set up purely for computer support as it is a good source of income for many people. The advancement of computers and technology has opened up the possibility for a whole new industry to be built on computer support alone. Even computer support teams have become necessary for the running of most large companies in order to keep their offices running smoothly.

This serves to remind us that although modern day computers are revolutionary and make most aspects of our lives much easier, they also control most aspects of our lives and can wreak havoc if they break down. Just think back to the last time the computers went down in your office. Many calls to computer support are made but when they are still not functioning after an hour or so, panic ensues, followed by everyone leaving work early. The fact that no work can be done when the office computers aren't working shows how heavily our world has come to rely on these machines and in turn, how much we rely on computer support to keep them, and our lives, up and running.

          UK High Court decision involving patentable subject matter - computer programs        
The U.K. High Court (Chancery Division, Patents Court) in a recent decision has rejected a claim that software enabling data to be transferred from one computer, on which the data is stored, to another machine connected remotely is patentable.  This decision is important because UK patent law has language similar to Indian patent law as it relates to the patentability of “computer software”, and because the decision clarifies the procedure to be used when determining a technical effect.  The discussion is divided into two parts.  This post discusses the UK decision, and a subsequent post will discuss the application of this decision in the Indian context.  Long post (part 1/2) follows.

Lantana Ltd. had applied to the U.K. Intellectual Property Office (UKIPO) for a patent related to an "electronic data retrieval system" involving the transfer of data between two different computers.  At a hearing on the patentability of Lantana’s application, a UKIPO officer rejected the application because the claimed subject matter of the invention was not patentable.  The rejection was made on the ground that the claimed subject matter related to "a computer program as such".  Lantana appealed to the UK High Court on the ground that the UKIPO officer had erred in considering the invention not patentable.
Claim 1 from Lantana’s (UK application) is reproduced below:
Claim 1 (key parts highlighted):
An electronic data retrieval system comprising a local station, a remote station, a packet switched network to provide a transmission path between the local station and the remote station, and a machine-readable data storage device storing retrievable data files including machine-readable data representing at least one of a visual product and an audio product,
wherein said local station includes:
a data store storing a plurality of machine-readable data retrieval criteria identifying data files among said retrievable data files stored at said machine-readable data storage device to be retrieved;
a packet switched network interface connected to said packet switched network;
a user interface co-operable with said data store and interactable with a user, to enable selection by the user of one or more machine-readable data retrieval criteria; and
an electronic processor configured to produce, in response to the selection by the user of the one or more machine-readable data retrieval criteria, a first e-mail message including the selected one or more machine-readable data retrieval criteria together with a machine-readable instruction for retrieving data files, among said retrievable data files stored at said machine-readable data storage device, using the selected machine-readable data retrieval criteria, and to send the first email message to the remote station via said packet switched network interface and said packet switched network;
wherein said remote station includes:
a packet switched network interface connected to said packet switched network to receive the first e-mail message from the packet switched network;
a filter adapted to parse the first e-mail message to determine whether the first e-mail message includes any machine-readable instruction and any data retrieval criteria; and
an electronic processor to execute the first machine-readable instruction, and upon execution of the machine-readable instruction and in accordance with the selected machine-readable data retrieval criterion, retrieve the one or more required data files among said retrievable data files stored at said machine-readable data storage device from the machine-readable data storage device, produce one or more second e-mail messages, the one or more second e-mail messages including the retrieved one or more data files as one or more attachments, and send to said local station, via the packet switched network interface of the remote station, and the packet switched network, the one or more e-mail messages and one or more attachments.
This claim may be summarized as follows: a user at machine A retrieves data stored on machine B, connected via a packet-switched network, using an e-mail message that contains machine-readable instructions for machine B to retrieve the data.  The e-mail is sent from machine A, and machine B e-mails the results/data back to machine A.
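The claimed flow is essentially a small request/response protocol carried over e-mail. As a rough sketch only (the function and variable names below are invented for illustration; the claim itself specifies stations, filters and processors, not any particular code):

```python
# Illustrative sketch of the claimed e-mail-based retrieval protocol.
# All names are hypothetical; messages are modeled as plain dicts
# rather than real e-mail.

FILES_ON_REMOTE = {                      # machine B's data store
    "report.pdf": b"...pdf bytes...",
    "demo.mp4": b"...video bytes...",
}

def local_station_compose(criteria):
    """Machine A: build the first e-mail with retrieval criteria + an instruction."""
    return {"instruction": "RETRIEVE", "criteria": criteria}

def remote_station_handle(message):
    """Machine B: parse the e-mail, retrieve matching files, reply with attachments."""
    if message.get("instruction") != "RETRIEVE":
        return None                       # filter: no machine-readable instruction
    matches = {name: data for name, data in FILES_ON_REMOTE.items()
               if any(criterion in name for criterion in message["criteria"])}
    # The "second e-mail": retrieved files attached, sent back to machine A.
    return {"attachments": matches}

reply = remote_station_handle(local_station_compose(["report"]))
print(sorted(reply["attachments"]))       # which files came back
```

The point of the sketch is that nothing here requires a continuous connection between the two machines; each leg of the exchange is a self-contained message, which is the fourth "effect" Lantana relied on.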
Applicable Law
Under European patent law, and UK patent law as applied, an invention involving a computer program is patentable if it makes a "technical contribution".
Section 1(2) of the (UK) Patents Act 1977 provides certain exclusions from patentability, and implements Article 52 of the European Patent Convention.  Under UK patent law, to qualify for a patent, the invention must be (1) new, (2) involve an inventive step (i.e. not be obvious), and (3) be capable of industrial application.  However, like Section 3 of the Indian Patents Act, Section 1(2) of the UK Patents Act provides that an invention cannot be patented if it is "a discovery, scientific theory, mathematical method, ...schemes, rules, methods for performing mental acts, ...programs for computers."

The patentability of computer programs had, before Lantana’s case, been considered by the UK Court of Appeal in HTC v Apple.  The HTC v Apple case reviewed the existing case law in the UK, including Aerotel Ltd v Telco Holdings Ltd and Symbian v Comptroller-General of Patents, and the guidelines issued by the European Patent Office.
Under UK patent law, the four-stage approach (as set out in Aerotel) to determine whether there is a technical effect is:
i) properly construe the claim;
ii) identify the actual contribution;
iii) ask whether it falls solely within the excluded subject matter;
iv) check whether the actual or alleged contribution is actually technical in nature.

The judge also considered signposts (provided in AT&T Knowledge Ventures) that may indicate whether there is any technical effect:
i) whether the claimed technical effect has a technical effect on a process which is carried on outside the computer;
ii) whether the claimed technical effect operates at the level of the architecture of the computer, that is to say whether the effect is produced irrespective of the data being processed or the applications being run;
iii) whether the claimed technical effect results in the computer being made to operate in a new way;
iv) whether the program makes the computer a better computer in the sense of running more efficiently and effectively as a computer;
v) whether the perceived problem is overcome by the invention as opposed to merely being circumvented.

The UKIPO officer rejected Lantana’s claim because it claimed a program for a computer and because Lantana’s computer program had no technical effect.  Lantana appealed, arguing that the UKIPO officer had misapplied the law.
Lantana argued that the claimed invention had a technical effect, relying on the EPO decision in IBM CORP T6/83, which held that a method of communication between programs and files held at different processors within a known network was patentable.  The judge did not accept this argument, stating that the fact that such a method was patentable in 1988 does not mean that any method of communicating between programs and files on different computers over a network necessarily involves a technical contribution today.
The judge also considered the factors involved in determining technical effect and found that they did not assist Lantana.  The judge held: “[However] the fact the claim is novel and inventive is not the determinant of whether it satisfies Art 52 EPC (requirements for patentability).  Being novel and inventive is not what takes a contribution outside the excluded area nor is it what makes an effect or contribution "technical".”
Lantana relied on the following four effects:
(i) telecommunications messages are generated by computers forming part of a telecommunications network, and transmitted from one computer to another over the network;
(ii) one computer remotely controls the processing performed by another via a telecommunications network;
(iii) the result of this remote control is the transmission of files and information from the remote computer over a telecommunications network to the local computer;
(iv) this remote control and transmission is achieved in a manner which does not require a continuous connection between the two computers.

The judge considered all four and found that the first effect did not help Lantana, as the invention involved communication between two computers over the internet, with everything going on inside the computers.  Lantana claimed that the second effect involved one computer remotely controlling another; the judge did not accept this argument and instead held that one computer was simply sending an e-mail message to another.  The third effect, the transmission of files and information over a telecommunications network as a result of this remote control, was likewise rejected as nothing new: it was already common (at the time of Lantana’s invention) for files or information to be transferred from one computer to another over a telecommunications network.  With respect to the fourth effect, the UKIPO officer had held that the use of e-mail merely circumvented the problem rather than solving it, and the judge accepted this.

The judge considered the five signposts and held that Lantana’s application had no technical effect.  The first signpost did not help Lantana because the operation took place inside the computer.  The second did not help because the program did not operate at the architecture level; rather, the data is retrieved remotely by piggy-backing on the operation of an e-mail application.  The judge distinguished the case of Symbian, where a computer program was patentable because it allowed a computer to operate other programs faster.  The third did not help either, because there was no new way of operating.  The fourth signpost also did not help, because neither of the two computers nor the network was intrinsically more reliable as a result of Lantana’s program.  The final signpost did not help, as Lantana’s invention circumvented the problem rather than solving it.

The key take away from this case is that for a claim to be patentable in the UK, it must be novel, inventive, useful and have technical effect.

          Interview with IBM’s Mike Moran, Distinguished Engineer and Author        
Stephan Spencer, Founder and President of Netconcepts, sits down with Mike Moran, a search marketing pioneer at IBM who helped grow IBM.com’s organic search traffic from about 1% to 25% within a few years. The interview is focused on implementation … Continue reading
          The Wow! signal and Antonio París's hypothesis        
Founding and operation of the Big Ear radio telescope in Ohio

It was a "Kraus-type" radio telescope located on the grounds of Ohio Wesleyan University's Perkins Observatory from 1963 to 1998. Known as Big Ear, the observatory was part of the SETI project (the search for extraterrestrial intelligence). Construction of Big Ear began in 1956 and was completed in 1961; it finally became operational in 1963.

The radio telescope was unlike the one we usually picture, with an enormous dish. Big Ear was designed by the American physicist John Daniel Kraus (1910-2004).

The observatory completed the Ohio Sky Survey in 1971, and from 1973 to 1995 Big Ear was used to search for extraterrestrial radio signals, making it the longest-running SETI project in history. In 1977, Big Ear received the Wow! signal. The observatory was dismantled in 1998, and the land was used to expand a nearby golf course.

The Wow! signal

Wow! was a strong narrowband radio signal received on August 15, 1977 by the Big Ear radio telescope, while the telescope was being used to support the search for extraterrestrial intelligence. The signal appeared to come from the constellation Sagittarius.

The astronomer Jerry R. Ehman discovered the anomaly a few days later, while reviewing the recorded data. He was so impressed by the result that he circled the reading on the computer printout and wrote the comment "Wow!" beside it, giving the event its widely used name.

In 1977, Ehman was working on the SETI project as a volunteer; his job was to analyze by hand large amounts of data processed by an IBM 1130 computer and printed on line-printer paper. The data were from August 15 at 22:16 EDT (02:16 UTC).

The alphanumeric sequence "6EQUJ5" shows the variation in the signal's intensity. The scale runs from 0 to 9 and then continues with the letters A to Z. The signal starts at "6", peaks at "U", and falls off through "J" and "5".

A common misconception is that the Wow! signal constituted some kind of message. In fact, what was received appears to be an unmodulated continuous-wave signal with no encoded information; essentially a burst of radio energy. The string "6EQUJ5" is merely the representation of the expected variation of the signal's intensity over time, expressed in the particular measurement system adopted for the experiment.
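The alphanumeric intensity code can be decoded numerically. A minimal sketch, assuming the usual Big Ear convention that digits 0-9 stand for themselves and the letters continue the scale from A = 10:

```python
def decode_intensity(code):
    """Map Big Ear's alphanumeric scale to numbers: '0'-'9' -> 0-9, 'A'-'Z' -> 10-35."""
    values = []
    for ch in code:
        if ch.isdigit():
            values.append(int(ch))
        else:
            values.append(ord(ch) - ord("A") + 10)
    return values

print(decode_intensity("6EQUJ5"))  # [6, 14, 26, 30, 19, 5]: rises to 'U' (30), then falls
```

Decoded this way, the sequence traces the bell-shaped rise and fall you would expect from a fixed source drifting through the telescope's beam.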

The 1420 MHz frequency

Through international agreements, several special frequency bands have been reserved for radio astronomy. Perhaps the most important of these bands covers the emission of hydrogen atoms at 1420 MHz. The "protected" band extends from 1400 MHz to 1427 MHz in order to allow observations of hydrogen gas moving across a range of velocities.
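This 1420 MHz emission is the famous "21 cm line" of neutral hydrogen; the wavelength follows directly from λ = c / f. A quick check:

```python
# Wavelength of the neutral-hydrogen emission line (the "21 cm line").
c = 299_792_458.0          # speed of light, m/s
f = 1_420.40575e6          # hydrogen line frequency, Hz

wavelength_cm = c / f * 100
print(round(wavelength_cm, 1))  # 21.1
```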

Two different values have been given for the frequency of the Wow! signal: 1420.36 MHz (J.D. Kraus) and 1420.46 MHz (J.R. Ehman).


The exact location in the sky where the signal apparently originated is uncertain because of the design of the Big Ear telescope, which included two receivers, each pointing in a slightly different direction as it followed the Earth's rotation. The Wow! signal was detected by one of the receivers but not the other, and the data were processed in such a way that it is impossible to determine which of the two received the signal.

Searching for the signal's origin

Various explanations have been proposed: a signal from an artificial spy satellite, a flash produced by a very intense stellar event, a signal emitted from the ground and reflected off space debris, and so on. But the one that has caused the most controversy is Antonio París's hypothesis, which proposes that the signal was produced by the activity of two comets.

Who is Antonio París?

Antonio describes himself as follows:

"He is an adjunct professor at St. Petersburg College, FL. He is also an astronaut candidate for Project PoSSUM's suborbital mission, supported by the NASA Flight Opportunities Program; Director of Planetariums and Space Programs at the Museum of Science and Industry in Tampa, FL; and Chief Scientist of the Center for Planetary Science, a science outreach program that promotes astronomy, planetary science and astrophysics for the next generation of space explorers. Professor Paris also graduated in 2015 from NASA's Mars Education Program at the Mars Space Flight Center of Arizona State University. He is currently the principal investigator at 'Site B', a 10-meter radio telescope in Central Florida investigating gamma rays and black holes."
And he continues:
"He is a member of the Washington Academy of Sciences and of the American Astronomical Society."
The comet hypothesis

In 2016, Mr. París suggested that the comets P/2008 Y2 (discovered December 1, 2008) and 266P/Christensen (discovered October 27, 2006) were in the same area of the sky where the Wow! signal was detected. He suggests that the hydrogen clouds surrounding the comets would produce a signal at the 1420 MHz frequency. This year, París reported that he pointed his 10-meter radio telescope at comet P/2008 Y2 and found that it emitted at a frequency of 1420.25 MHz. París claims he made sure the signal came from the comet and not from other sources (such as pulsars or active galactic nuclei). He verified this by moving the telescope 1°, watching the 1420.25 MHz signal disappear, and seeing it reappear when he repositioned it. He then randomly selected three other comets and studied them, finding that they emitted at the same 1420 MHz radio frequency.

Skepticism in the scientific community

Paris announced his publication on his "Planetary Science" website, where he explains:
"The journal of the Washington Academy of Sciences is the official organ of the academy of the same name. This peer-reviewed journal publishes original scientific research and has included the scientific work of ten Nobel laureates."
It is hard to believe that a scientific paper resolving one of the great mysteries of radio astronomy would not be published in Nature, Science or the Astrophysical Journal. That the work was published in this obscure "journal" suggests that it did not undergo rigorous review. Unfortunately, most Spanish-language media outlets, and some English-language ones, echoed París's work, and the mistaken idea that the "Wow! mystery" had been solved spread across social networks.

Seth Shostak's opinion

I contacted Shostak of the SETI Institute and received an interesting reply:
"No, that doesn't make sense. And the principal radio astronomer at the former Ohio State Radio Observatory explained why almost a year ago.
Also... think about it. The telescope had TWO feed horns (two receivers). The second looked at the same spot in the sky 70 seconds after the first, and did not see the signal. Comets don't move fast enough to get out of the beam in 70 seconds. It makes no sense."

Email received on June 7.
Shostak makes the same point in an article published on the SETI website, titled "Was it ET on the line? Or just a comet?" (June 13, 2017).

Chris Lintott's questions

Chris is a professor of astrophysics in the Department of Physics at the University of Oxford. He was one of the first to voice skepticism about París's work. On Twitter he posted a document listing several questions addressed to París.

"Wicho" of the site "Microsiervos" translated them into Spanish, and we thank him for it.

  1. There is not enough information about the equipment used for other scientists to attempt to reproduce the experiment.
  2. From the details the study gives about the instrument used, an object in the region they were observing would take about five minutes to cross its field of view, yet all the supposedly detected signals are shorter than that. Why?
  3. It is not known how they guarded against interference; it is not even known where the equipment was installed. In fact, it cannot be ruled out that what they detected was the Sun.
  4. If the detected signal is so strong (as strong as the core of the galaxy), why has no other comet ever shown a similar signal?
  5. Indeed, why does 266P/Christensen show radio activity at all, especially considering that during the observations it was far from the Sun and therefore inactive? What process produces it? The mere presence of hydrogen does not explain it; there is hydrogen everywhere in the universe, but it does not produce radio emissions just because.
  6. Why was it necessary to observe the comet in the same portion of the sky as when the Wow! signal was detected? Wouldn't it make more sense to observe comets at the same distance from the Sun as 266P/Christensen was when the signal was detected?
  7. There are discrepancies between the positions Paris estimates for 266P/Christensen on August 15, 1977 and those estimated by other astronomers, who say that on that date it was not in the region of the sky he claims.
  8. And even if Paris really has detected a signal, it differs in intensity, bandwidth and duration from the Wow! signal, so it is hard to see how they are related.


Yvette Cendes's opinion

Yvette is a radio astronomer based in the Netherlands. She published an article on Reddit offering her impressions of París's work.

Besides touching on several of the points Chris raised, there is one that seems especially important to me. It has to do with measuring the signal strength in decibels.

Yvette comments:
"I have NEVER used dB in a paper, nor have I ever read an article in radio astronomy that measures signal intensity in dB (except perhaps in the context of an instrumentation paper describing the systems of a radio telescope, i.e. not the science but the engineering). We use a different unit in astronomy for flux density, the Jansky (Jy), where 1 Jy = −230 dBm/(m²·Hz). (dB is a logarithmic scale, and the Jansky is not.)"

Antonio Paris's reaction

From the start, Antonio boasted on Twitter that he had "solved" the Wow! mystery, joked that maybe the ETs were on the comet, and shared the stories that mentioned him. As the hours passed and the first critical voices began to appear, París kept silent and took no notice.

The SETI statement, Ehman's opinion, Yvette's article and the scientific community's skepticism were becoming ever more widely known. And what did París do?

On June 15 he published a brief statement that read:
"The Center for Planetary Science is fully aware of some criticism, specifically from SETI and from the staff of the former Big Ear radio telescope, regarding the 'Wow!' paper. The Center for Planetary Science, however, stands by its findings and will not be pressured into pulling the paper from print. The purpose of science, including our paper, is to prompt more questions, which in turn leads to more science. We disagree with SETI and the Big Ear staff."
It sounds like a joke, but unfortunately it is not.


That same day, on Twitter, Paris dismissed Yvette's publication: "Reddit is not a credible source of information..." he commented. Let us hope París has recovered after biting his tongue writing that.


In a June 17 tweet, París commented that "'Wow!' is a source of income for SETI, which is why they want it to remain a mystery". That argument might make sense if his publication did not have so many flaws. Moreover, by insinuating that SETI is to blame, París plays the victim.


On June 8 I wrote to París by email. I asked him why he did not publish in a high-impact journal, and what he thought of Jerry Ehman's point of view. His reply was a copy/paste of what appears on his website: "The Washington Academy of Sciences... ten Nobel laureates... blah blah blah." I told him that did not convince me, and that it seemed to me he was worried about the peer review of a high-impact journal. He reiterated that he is sure of his conclusions; there are no doubts.

I gave up; he was evading my questions. The last thing I asked him was: what comes next? París will investigate why comets emit at 1420 MHz; that will be next year.

What have we learned?

This controversy has left us some lessons.
The media still do not check the reliability of their sources and do not consult specialists on the subject. Some promoters of skepticism, in their eagerness to knock down anything that might be extraterrestrial or hard to explain within science, end up promoting this kind of publication. And París is guilty of deepening the disdain for amateur researchers.


Information about Big Ear

Information about the 1420 MHz frequency

On the signal intensity and the alphanumeric scale

Information about the Wow! signal

Information about Antonio París

Translation of the abstract of Antonio's publication

Antonio París's publication

Publications critical of Antonio's work

Article by Yvette Cendes (astronomy PhD student)

SETI statement written by Seth Shostak

"Wicho's" translation of the questions raised by Chris Lintott

          How much ALM do you need for cloud applications?        
ALM (Application Lifecycle Management) means different things to different people, and these views are largely influenced by tool vendors. IBM users may bias their view of ALM to things that the Rational toolset is good at — say requirements traceability and Java-oriented modelling. Microsoft users may see ALM as being about using TFS (Team Foundation […]
          3048 Intel Core2Duo T7200 CPU processor for IBM Lenovo T60 T61 Dell D630 D830 laptops - Current price: 2 499 Ft        
Parcels are shipped on Mondays!
If you need immediate shipping, ask first; I cannot always send it sooner.
Personal pickup is possible at my premises, at a pre-arranged time.
I issue a VAT-exempt invoice for the purchase.
Trade-in/exchange is possible with personal pickup.
Installation and testing of parts can be requested at my premises. The conditions are described on my info page.
Before buying, be sure to read my info page.
667 MHz FSB, 2 GHz, 4 MB cache, 64-bit
Socket M
http://www.cpu-world.com/sspec/SL/SL9SF.html
3048 Intel Core2Duo T7200 CPU processor for IBM Lenovo T60 T61 Dell D630 D830 laptops
Current price: 2 499 Ft
Auction ends: 2017-08-27 15:08
          Year in review: 2012        
I’d sketch this, but Adobe Illustrator CS6 keeps crashing on me and I’m tired of fighting with my computer today. Next time! This year was about experiments. After building up my “opportunity fund,” I turned over my projects at IBM and left to start a 5-year experiment exploring what you can learn and build if […]
          Comment on Notes / Domino 10 by David Hablewitz        
I keep hearing that it's too hard to make a change in the number to the left of the decimal. This appears to be a bigger problem than Y2K.
version 9.0 in 2013
version 8.5 in 2008
version 8.0 in 2007
version 7.0 in 2005
version 6.5 in 2003
version 6.0 in 2002
version 5.0 in 1999
In the technology industry, if you aren't moving forward, you're falling behind. Or more accurately, if you don't appear to be moving forward, you're falling behind. Did IBM handcuff itself so badly by building a wall of red tape around the version control that for all practical purposes it can't create new versions? It sounds like IBM's legal and marketing departments need to catch up to the 21st century. Perhaps Watson could help with that? Whatever happens, we cannot go on forever saying "version 9.0.1 feature pack XX".
          Comment on Join Me on IBM Watson Workspace by Ryan Bedino        
Can you invite me as well? Thanks a bunch!
          Comment on Join Me on IBM Watson Workspace by Jurjen van den Broeck        
Hi, Please invite me: jurjen.vdbroeck Thnx!
          Comment on Join Me on IBM Watson Workspace by Sam David        
Hello Notesguy, Can you please invite me? Please let me know how I can send you my IBM ID in a private message. Thank you!
          Comment on The future of transportation with IBM Watson by DanS        
NVIDIA is in the self-driving car game too: "like having 150 MacBook Pros in your trunk." Tesla uses it (plus sensors and their own software).
          Comment on Join Me on IBM Watson Workspace by Luis Benitez        
Since you were asking.. it's neither alpha nor beta.. just Preview :)
          Comment on IBM Connect 2017: Finally Comes to the West Coast! by Reinhard        
Nice to hear IBM Connect is getting up again in US, greetz from europe
          Comment on IBM Connect 2017: Finally Comes to the West Coast! by David Hablewitz        
The good ol' days when we had a dev conference and an admin conference as well as Lotusphere. Let's hope this marks a change in direction and momentum.
          After Math: Do you see?        

It was an illustrative week for machine vision. Sony's high-speed eyes allow robots to see at 1000 FPS, IBM trained a neural network to spot schizophrenia, and MIT's AI knows what's in your meal just by looking at it. Numbers, because how else do you measure your myopia?

          IBM's AI can predict schizophrenia by looking at the brain's blood flow        

Schizophrenia is not a particularly common mental health disorder in America, affecting just 1.2 percent of the population (around 3.2 million people), but its effects can be debilitating. However, pioneering research conducted by IBM and the University of Alberta could soon help doctors diagnose the onset of the disease and the severity of its symptoms using a simple MRI scan and a neural network built to look at blood flow within the brain.

          Supercomputer models sunspots        

Sunspot model: Although it resembles the Eye of Sauron from "The Lord of the Rings," this computer model of a sunspot is far from fiction. It was produced by a team of researchers who wrote down equations so complex that only a supercomputer can solve them (specifically, the IBM Bluefire, which performs 76 trillion operations per second).

The equations incorporate data on energy, hydrodynamics, magnetic induction, and other physical phenomena that determine the nature of sunspots, regions of intense activity on the Sun. Sunspots eject charged plasma, which triggers geomagnetic storms and knocks out communication and navigation systems on Earth.

The central dark part of a spot is called the umbra; the brighter parts are granules. Elongated filaments stretch from the spots into the outer penumbral regions.

Modeling these dynamic solar forces will help scientists better understand which processes give rise to sunspots, as well as how they affect the Earth.

Tracy Staedter, Discovery News

          IBM Polska and Winuel jointly create a solution for the energy sector         
The companies have created a Smart Metering software package. It was developed at the IBM Software Laboratory in Kraków. Its prototype is now being tested in a Big Blue laboratory in France.
          BMC buys MQSoftware         
BMC is acquiring MQSoftware, a middleware company whose products support monitoring of IBM WebSphere MQ as well as other platforms.
          Top 10 Web Vulnerability Scanners        
Here is the list:

1. Nikto : A more comprehensive web scanner
Nikto is an open source (GPL) web server scanner which performs comprehensive tests against web servers for multiple items, including over 3200 potentially dangerous files/CGIs, versions on over 625 servers, and version specific problems on over 230 servers. Scan items and plugins can be updated automatically (if desired). It uses Whisker/libwhisker for much of its underlying functionality. It is a great tool, but its value depends on keeping that scan database current; when updates lag, the newest and most critical vulnerabilities go undetected.

2. Paros proxy : A web application vulnerability assessment proxy
A Java based web proxy for assessing web application vulnerability. It supports editing/viewing HTTP/HTTPS messages on-the-fly to change items such as cookies and form fields. It includes a web traffic recorder, web spider, hash calculator, and a scanner for testing common web application attacks such as SQL injection and cross-site scripting.

3. WebScarab : A framework for analyzing applications that communicate using the HTTP and HTTPS protocols
In its simplest form, WebScarab records the conversations (requests and responses) that it observes, and allows the operator to review them in various ways. WebScarab is designed to be a tool for anyone who needs to expose the workings of an HTTP(S) based application, whether to allow the developer to debug otherwise difficult problems, or to allow a security specialist to identify vulnerabilities in the way that the application has been designed or implemented.

4.WebInspect : A Powerful Web Application Scanner
SPI Dynamics' WebInspect application security assessment tool helps identify known and unknown vulnerabilities within the Web application layer. WebInspect can also help check that a Web server is configured properly, and attempts common web attacks such as parameter injection, cross-site scripting, directory traversal, and more.

5.Whisker/libwhisker : Rain.Forest.Puppy's CGI vulnerability scanner and library
Libwhisker is a Perl module geared towards HTTP testing. It provides functions for testing HTTP servers for many known security holes, particularly the presence of dangerous CGIs. Whisker is a scanner that used libwhisker, but it is now deprecated in favor of Nikto, which also uses libwhisker.

6.Burpsuite : An integrated platform for attacking web applications
Burp suite allows an attacker to combine manual and automated techniques to enumerate, analyze, attack and exploit web applications. The various burp tools work together effectively to share information and allow findings identified within one tool to form the basis of an attack using another.

7.Wikto : Web Server Assessment Tool
Wikto is a tool that checks for flaws in web servers. It provides much the same functionality as Nikto but adds various interesting pieces of functionality, such as a Back-End miner and close Google integration. Wikto is written for the MS .NET environment, and registration is required to download the binary and/or source code.

8. Acunetix WVS : Commercial Web Vulnerability Scanner
Acunetix WVS automatically checks web applications for vulnerabilities such as SQL Injections, cross site scripting, arbitrary file creation/deletion, weak password strength on authentication pages. AcuSensor technology detects vulnerabilities which typical black box scanners miss. Acunetix WVS boasts a comfortable GUI, an ability to create professional security audit and compliance reports, and tools for advanced manual webapp testing.

9. Rational AppScan : Commercial Web Vulnerability Scanner
AppScan provides security testing throughout the application development lifecycle, easing unit testing and security assurance early in the development phase. Appscan scans for many common vulnerabilities, such as cross site scripting, HTTP response splitting, parameter tampering, hidden field manipulation, backdoors/debug options, buffer overflows and more. Appscan was merged into IBM's Rational division after IBM purchased its original developer, Watchfire.

10. N-Stealth : Web server scanner
N-Stealth is a commercial web server security scanner. It is generally updated more frequently than free web scanners such as Whisker/libwhisker and Nikto, but do take their web site with a grain of salt. The claims of "30,000 vulnerabilities and exploits" and "Dozens of vulnerability checks are added every day" are highly questionable. Also note that essentially all general VA tools such as Nessus, ISS Internet Scanner, Retina, SAINT, and Sara include web scanning components. They may not all be as up-to-date or flexible though. N-Stealth is Windows only and no source code is provided.
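Several of the scanners above (Nikto, Wikto, N-Stealth) share the same core loop: probe the target with a database of known-dangerous paths and flag the ones the server actually answers for. A minimal Python sketch of that idea, with a hypothetical path list and a caller-supplied fetch function (none of these names come from the tools themselves):

```python
from urllib.parse import urljoin

# Hypothetical probe list -- real scanners ship databases of thousands of
# such entries, kept current through updates.
DANGEROUS_PATHS = ["/cgi-bin/test-cgi", "/admin/", "/.git/config", "/phpinfo.php"]

def candidate_urls(base_url):
    """Build the full probe URL for every known-dangerous path."""
    return [urljoin(base_url, path) for path in DANGEROUS_PATHS]

def scan(base_url, fetch):
    """Return the probe URLs the server serves.

    `fetch` is any callable mapping a URL to an HTTP status code, so the
    scan logic can be exercised without touching the network.
    """
    return [url for url in candidate_urls(base_url) if fetch(url) == 200]
```

Passing `fetch` in as a callable keeps the probe logic testable without a live target; a real run would supply something like an `urllib.request`-based status check, plus the rate limiting and false-positive handling the commercial tools compete on.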
          NYC’s Wafels and Dinges makes its debut at Kennywood Amusement Park this summer        
It’s a simple menu, pairing waffles with a variety of sweets ranging from dulce de leche to strawberries. Ten years ago, Thomas DeGeest quit his job as a consultant at IBM to focus on waffles. …
          A Wize Move for IBM        
Last month, we successfully exited our investment in Storwize through its sale to IBM. IBM had been working with Storwize for some time on technical validation and joint marketing initiatives, and gradually became convinced that Storwize was a critical enabling technology that could disrupt the entire storage segment, including their own product lines (why rent when you can own?).

This relatively quick exit scenario for BVP was completely in line with our original investment thesis that data reduction technologies, such as Storwize’s primary storage compression, will ultimately prevail due to the considerable economics at stake (I have another outstanding bet in Sepaton). Primary storage capacity optimization was the latest data reduction trend for the simple reason that it was the most challenging. The stringent performance requirements of primary storage and the mission critical applications that depend on it meant that an optimization solution could not introduce any latency or performance degradation. So when pundits told me during my due diligence process that Storwize’s claims were inconceivable and that several large companies had failed in past attempts, I decided it was a risk worth taking with a crack Israeli team. Storwize was truly an “amazing” company, as just getting the product to work would be highly accretive to shareholders, due to the complexity of the problem, the elegance of the solution and the enormous end market.

As an investor, one always has mixed feelings about selling a company. After years of investing money, time and energy, your involvement comes to a sudden and unceremonious halt as you promptly resign from the board and resume being a curious spectator. The entire Storwize team did an outstanding job in bringing the company to an inflection point, where shareholders had to decide between accepting an attractive acquisition offer or raising more capital to invest in growth and R&D. Storwize was in the pole position, had some incredible things under development, and had strong leadership across the board. 

But it is no secret that building an enterprise storage company into a big business is exceedingly difficult and expensive, especially from Israel with our distance from the customers and overindulgence in technology. If IT departments are naturally conservative, storage departments are among the most unadventurous of the lot, always favoring the large incumbent even if they do overcharge and overpromise. And contrary to what one might think, a deal like this with IBM doesn’t appear out of thin air due to hype or fear, but only after years of hard work to overcome skepticism, even outright disbelief that the product really works. Many still don’t and won’t believe, but that is the natural reaction to amazing companies until their product is considered mainstream.

Storage was never an easy market to penetrate, and Storwize would be no exception, all of which led us to conclude that the acquisition offer made sense. I am sure the success of Storwize and IBM’s recent Israeli storage acquisition binge will entice more storage entrepreneurs and investors, and all I can say is that this is not for the faint of heart.

My hat is off to the entire Storwize team including Ed Walsh, Gal Naor, Yoni Amit, Ori Bauer, Mary Henry, Steve Kenniston, and Tzahi Shahak, not to mention many others in sales, marketing, support, product and development. It was truly a team effort across geographies, disciplines and cultures, and I was proud to be an investor and board member. Thanks for the ride, and good luck to you all!

          Hyperion DRM Consultant - Veterans Sourcing Group - Tampa, FL        
O IBM Tivoli Maestro or other enterprise job scheduling software experience. Hyperion DRM Consultant....
From Veterans Sourcing Group - Thu, 15 Jun 2017 03:30:22 GMT - View all Tampa, FL jobs
          Hyperion DRM Administrator / Developer - Veterans Sourcing Group - Tampa, FL        
O IBM Tivoli Maestro or other enterprise job scheduling software experience. Hyperion DRM Administrator / Developer....
From Veterans Sourcing Group - Thu, 15 Jun 2017 03:30:20 GMT - View all Tampa, FL jobs
          Onsite lead for Monitoring and Job Scheduling services - Fujitsu - Cambridge, MA        
O Lead the team to migrate job schedules from Maestro to Rundeck. Experience with job scheduling tools Rundeck and IBM Workload Scheduler (IWSd - also known as...
From Fujitsu - Tue, 08 Aug 2017 18:21:07 GMT - View all Cambridge, MA jobs
          Tivoli Workload Scheduler - Veterans Sourcing Group - Cary, NC        
Supports IT partner groups with Distributed Scheduling needs utilizing IBM Workload Scheduler (IWSd aka Maestro). Knowledge of Linux systems is a requirement....
From Veterans Sourcing Group - Fri, 04 Aug 2017 03:20:14 GMT - View all Cary, NC jobs
          Sony and IBM Team to Secure Education Data with Blockchain        
Sony has developed a new educational platform that uses IBM Blockchain to secure and share student records.
          Cockroaches of Stay More        
Cockroaches of Stay More
author: Donald Harington
name: Lawyer
average rating: 4.17
book published: 1989
rating: 5
read at: 2014/12/25
date added: 2016/07/19
shelves: 2014, arkansas, contemporary-southern-literature, donald-harington, group-read, on-the-southern-literary-trail, religion, satire, southern-humor, stay-more-novel, humor
Cockroaches of Staymore: A Place in the Choir

Cockroaches of Staymore by Donald Harington was chosen as a group read by On the Southern Literary Trail for December, 2014. Special thanks to Trail Member William who nominated this novel.

The Cockroaches of Staymore, First Ed., Harcourt Brace Jovanovich, New York, New York, 1989

Donald Harington, December 22, 1935-November 7, 2009

All God's critters got a place in the choir
Some sing low and some sing higher,
Some sing out loud on a telephone wire,
Some just clap their hands, or paws, or anything they've got now--

Bill Staines, 1966

Cockroaches of Staymore is my third visit to Staymore, Arkansas. With each visit, I have been sad to leave it. I have wished that I could Stay More, as its inhabitants are known to implore you to do. Not that they genuinely mean it. It's a recognized courtesy in that little community, a compliment that signals the value you attach to the members of your community and your guests. If you stop and think about it, not many of us have that attitude towards our company these days. We don't say it, but our silent thought is, "When the hell, y'all gonna get home? Time's a wastin'. The wife's not too sleepy. The supper's done. I might just get lucky tonight. Too bad, buddy, if it's not in your stars tonight. Well, ever dog has his day. Too damn bad if this ain't yours." But we keep that to ourselves. Humans have a way of reading our unsent signals though. The way we cut our eyes, look at our watch. Cut the volume up a little on the TV. Mutter a little something about needing to get an early start on tomorrow's day. And before you know it, the party's over.

But in Staymore, well, in Staymore, things just move at a little slower pace. It's nice. Folks just never make you feel like you're being hurried along. That's nice. Don't you think?

I discovered Staymore, Arkansas, and its creator Donald Harington as a result of reading an issue of Oxford American Magazine, the Journal of Fine Southern Writing. Harington was recognized as the winner of the Oxford American's first Lifetime Achievement Award for Southern Literature in 2006. Oxford American and its fine staff have frequently put remarkable works in my hands. I owe it to them for connecting me to Donald Harington.

My first visit was to what I believed to be the first Staymore novel, Lightning Bug. I knew immediately I had fallen into the hands of a master author who held me spellbound, the creator of a world I longed for, to live in, to escape to, never to leave. My review is here. Lightning Bug.

I quickly realized that it was easy to establish the order in which the "Staymore" novels were published. Almost simultaneously I discovered that the plots of the dozen or so tales do not flow chronologically from a historical perspective. If you've not ventured into Harington Country before, I'd actually recommend you start with The Architecture of the Arkansas Ozarks. For here are the origins of the very founding of the town, its early history, and its earliest residents. For background on Staymore, here's my review. Architecture of the Arkansas Ozarks.

When William nominated Cockroaches of Staymore, my immediate reaction was Trail Members visiting Staymore for the first time would possibly think Mr. Harington had taken a trip with Carlos Castaneda or Timothy Leary. In the most benevolent light, first time Harington readers would view him as a man whose cheese had slid from his biscuit. I am ever indebted to my good friend Jeffrey Keeten who acquainted me with this expression. It has frequent application. I appropriate it with proper attribution--of course.

After all, this is a book about cockroaches. Or, as these critters are referred to--roosterroaches. "Cock" has such negative connotations in polite society, roach society, that is. Though, sex is a very naturally received fact of life among them, both male and female. And the intricacies of the courtship are quite...intricate, shall we say? Ah, pheromones do make things much less complicated. Much more natural. Shall we say spontaneous? Among us human kind, spontaneity can be such a squelching factor in these days and times. How does your calendar look tonight? Not good. Is tomorrow good for you? Uhm...We have dinner with the .... then. OH.

Of course, I lived through what I'm told was a sexual revolution completely oblivious of one having taken place. Late bloomer. Well, you can't go home again. So it goes.

And we silly humans. Has anyone figured out why Man and Woman are in separate bath tubs in those Cialis ads? Oh. And those little blue, purple, and yellow pills that the ads tell all us guys over forty that we probably need? The average act of roosterroach coitus takes three hours. Uhm...and male roosterroaches have three, you know. And they don't have to go to the emergency room if those thingies are uhm...inflated in excess of three hours. Don't anybody get titillated out there.

But this is what we're talking about people! Would you read a book about these?

American cockroach, Periplaneta americana, one of the oldest life forms on earth. They have been with us forever.

Be honest. You have a flyswatter in your home. Right? There's that can of RAID under the kitchen sink. You've laughed at the RAID commercials. You like your Orkin Man. You are not an organic gardener. Bugs make your tomatoes ugly. You believe in better living through chemistry. Down at the lawn and garden center you are known as "Ortho Man." You know that's you. You think Donald Harington's a Nutcase!

But, my friends, for you are all my friends, I must disabuse you of your preconceived notions, your biases, your prejudices. You are wrong.

This is something, I know, not easily accepted. So, we will take this in little steps. Consider it an exercise in gentle desensitization.

First, think of that little photograph above and think of those two insects being in love. Betrothed to one another. They're singing a little song.

Oh, we don't know what's coming tomorrow
Maybe it's trouble and sorrow
But we'll travel the road
Sharing our load side by side

There now. Think about it. Now, we're going to take a little break to let all of you think about this. Actually, I'm being threatened with my life by the Queen and Cousin Kathleen, who is much like the Queen. Together they are they who must be obeyed. And we shall continue this upon my return to FREEDOM!

Having sung, "Let my people go" numerous times, only to be ignored or given baleful stares, I am free. Cousin Kathleen is busily packing. Her flight out leaves this afternoon. I do hate to see her go. Really, I wish she would stay more. I have told her so. She has replied in kind that the Queen and I should just fly back to Dallas with her and stay there a spell. We finally wound the discussion down with the general agreement that we would do this again real soon. That's true Staymoron style.

So, back to Harington's highly original and inventive Cockroaches of Staymore. These critters, you will discover, are quite like us humans. Actually, Harington probably used them as an example to us, pointing out just how foolish we men and women can be.

The world of the roosterroaches is inexorably intertwined with that of the humans of Staymore. And the roosterroaches have taken on the class structure of Staymoron society. Each of the little critters is a familiar of the former human residents of Staymore.

At the high end of roosterroach world are the Ingledews, just as it was in the human society of Staymore. It was the Ingledews that founded the town after all. And all the other former residents of Staymore have their roosterroach doppelgangers.

However, things are not as they once were in Staymore. The town, once teeming with its citizens, is now abandoned except for the presence of two humans. One is a man, an outsider, Larry Brace, living in "Holy House," as it is known to roosterroach society, as encouraged by Brother Chid Tichborne, the Reverend Frockroach who preaches the Gospel of Joshua H. Chrustus, Son of Man. Man is no less than Larry Brace.

It's only natural that the roosterroaches worship Man. For it is on the refuse of Man that the roosterroaches survive. Religion can get right complicated. Brace's house is Holy House because he, uhm...drinks alcohol. A lot of it. And when he is far gone in his liquor, when he sees a roosterroach skittling across the floor to what they call the cooking room, he pulls out a revolver and lets off a round or two. So, Man's House is Holey because Larry has shot it full of holes. In the process, Larry's wild stray rounds may blast away an unfortunate roosterroach. Tichborne explains that the departed has "gone West," been "Raptured," and gone to live at the Right Hand of Man.

Frankly, Brace has become a rather undependable "Lord." Tichborne thinks of changing worship from that of Man to that of Woman. The other human residing in Staymore is Sharon, the granddaughter of Latha, former Postmistress of Staymore, owner and operator of the town's General Store, and the heroine of Lightning Bug. Sharon lives in Latha's former residence, which she shares unknowingly with the Ingledew roosterroaches. The Old Squire has a cabinet in the kitchen, where the best victuals in Staymore are to be found. His son, Sam, has taken up quarters in an eight day clock overlooking Sharon's bed. Sharon's home is known to the roosterroaches as "Parthenon."

Sam Ingledew is an exceptional roosterroach. Consider him a non-Chrustian, an Existentialist. Sam refers to himself as Gregor Samsa. Ring a bell? For all his self-perceptions, Sam has managed to fall in love with Sharon and wonders what it might be like to make love with her. That would be quite a metamorphosis. Lingering over Sharon's face as she sleeps, he has lived in the clock too long. The chimes of the clock have made him deaf. Once again, Harington inserts a bit of himself into his own novel. Harington lost his hearing almost completely at the age of twelve. He has previously appeared as such characters as "Dawny" in Lightning Bug, where he was hopelessly in love with Latha.

Leda and the Swan, Giovanni Rapiti: Stranger Metamorphoses have happened. Right?

The pickings for roosterroaches in Holey House are becoming slim. Man has become an unpredictable provider. Frockroach Tichborne develops a scheme to convince lowly Jake Dingletoon that he is in fact an Ingledew, entitled to claim kin to the Old Squire and Sam Ingledew. If Tichborne can insert Dingletoon into Parthenon, the generous but slow witted Jake will open Parthenon to all the roosterroaches of Staymore.

Harington artfully interweaves the roosterroaches' lives with those of Larry and Sharon. Roosterroach society is divided when Frockroach Tichborne decides to worship Woman instead of Man. And Tichborne will stop nothing short of "INSECTICIDE" to put his plans to take over Parthenon in place.

Two worlds, insect and human, begin to swirl out of control. When Larry shoots himself in his gitalong-er-leg, can the roosterroaches save him? Can they get word to Sharon?

Did you ever think an IBM Selectric Typewriter could be a thing of value?

What's a white mouse doing in Staymore?

Oh...and for all you doubters in Joshua Crust--read Cockroaches of Staymore to learn about the biggest and baddest of all roosterroaches, the Mockroach. He'll put you in mind of Uncle Screwtape. You know. The Uncle who wrote all those letters to his nephew.

While I was quite melancholy at the beginning of this quirky novel to find Staymore abandoned by the human characters I had come to love, I became enchanted by the world Harington created in the society of the roosterroaches. The little critters are more like us than any of us would care to admit. And Harington uses them to point out all the foibles, weaknesses, strengths, and the best of what it is to be human.

Cockroaches of Staymore could easily turn out to be my favorite of Harington's Staymore novels. This is a brilliantly sharp work of humor and satire that skewers class structure, religion, politics--you name it. However, it's too early to tell whether this novel will be my favorite visit to Staymore. I have nine more journeys to make to that magical place. Harington has written the most original anthropomorphic work since Aesop's Fables.


Biography of and Interview with Donald Harington by Edwin Arnold

Donald Harington and his Staymore Novels A Thirty Five Year Celebration, by Bob Rasher


The Bug, Dire Straits, 1992

It's a Bug's Life, Randy Newman

The Typewriter by Leroy Anderson



É inegável o impacto e transformação que a internet causou na sociedade. Os meios de negociação, trabalho e relacionamento mudaram, o e-comercio se tornou algo cotidiano, levando inclusive a diversas lojas deixarem de existir fisicamente para atuar apenas no mundo virtual.
As notícias e acontecimentos são divulgadas quase instantaneamente, não somente pelos canais de comunicação normal, mas por pessoas comuns que se tornam referência em um determinado assunto, o conhecimento é criado de forma colaborativa de diversas partes do mundo, e amplamente divulgadas através de redes sociais, blogs, foruns etc.

O próprio modo de relacionamento entre as pessoas mudou. Para muitas pessoas é mais comum conversar por redes sociais do que pessoalmente. O ensino a distância (EAD) aumentou consideravelmente, e existem até mesmo faculdades oferecendo curso de graduação pelo Facebook. Toda essa revolução é muito bem mostrada na séries de vídeos "Você sabia?" ("Did You Know?"), que pode ser encontrada no site do youtube.

A Internet e a Mobilidade

Toda essa mudança é alavancada pela rápida inovação tecnológica que passamos. Os meios de acessos e consumo de informação estão cada vez menores e portáteis, além de financeiramente mais acessíveis. As interfaces de uso estão cada vez mais intuitivas e mesmo pessoas com pouco conhecimento em informática são capazes de operar um tablet como o iPad®. De fato, o acesso ao ambiente virtual tem se tornado tão intuitivo e portátil que muitas vezes o mundo real e virtual se mesclam, e conceitos como realidade estendida (ou sexto sentido), ou presença remota se tornam cada vez mais comuns. Apenas como exemplo, existem robôs sendo vendidos para o mundo corporativo em que o funcionário manipula remotamente seu "avatar" da sua residência e interage com seus colegas de trabalho como se estivesse presente na empresa, participando de reuniões, debates, conversas no café etc.

Outro fator que influencia essa migração para mundo virtual são os crescentes problemas que os grandes centros urbanos geram, como violência e trânsito. A locomoção para estar fisicamente em um lugar tem se tornado cada vez mais complicada nos grandes centros. Em São Paulo, onde resido, é extremamente comum demorar entre 1 e 2 horas para fazer em horário de pico um trajeto que normalmente levaria de 10 a 20 minutos. Em épocas de chuvas e alagamentos, se torna inviável ir para alguns lugares da cidade, além da crescente violência, que inibem a circulação em determinados lugares e horários.

Oportunidades para a E-church (Igreja 2.0)

Todo esse contexto apresenta uma grande oportunidade para a igreja. Temos a oportunidade de romper a barreira física, e nos relacionarmos como igrejaidenpendentemente do local ou distância que estejamos uns dos outros. Existem diversas possibilidades na expansão virtual, mas vou citar alguns benefícios imediatos:

Preaching - 

We can take the gospel to far more people around the world much faster and more cheaply, and it is also a safer channel in countries closed to the gospel. Another advantage is that we can use a variety of resources to complement what is being preached: maps, photos and so on.

Accessibility - 

People who are unable to attend a service in person (whether because of health problems, mobility limitations or any other reason) could take part together with the church, singing, worshipping, praying and hearing the Word of the Lord.

Teaching - 

Through distance-learning (EAD) practices, we could educate and train people in many parts of the world, or teach far more people than could fit in a physical classroom.

Games and virtual reality - 

Although this is a relatively new concept, there is a trend toward using games and virtual worlds for education, meetings and knowledge production. One example is CityOne, a game released by IBM to educate executives and consultants about its Smarter Planet program and how to apply the various tools the company offers. Similarly, we could create educational games about the Bible, ethical values, principles and so on, reaching children and teenagers in particular.

Meetings - 

It is often necessary to bring people together to settle specific matters. Using telepresence, VoIP and other communication technologies, we could resolve problems with faster, more precise action.


Counseling - 

Many of the problems people go through could be handled by a counselor or pastor over the internet, whether by messages or by telepresence. This would mean being able to give attention to more people, since travel time is eliminated.

Not losing focus: reaching lives

Before concluding this post, I want to make clear that I am not proposing we swap out the way we relate to each other as a church, or that in-person relationships should be set aside. On the contrary: nothing yet replaces being face to face. Still, we cannot dismiss the opportunities and benefits that technology has given us. I see that few churches have invested anything in this direction, and it is a mistake to think that only large churches can afford this kind of infrastructure. One of the few churches I know that streams its services live over the web, with the possibility of audience interaction, is a small church.

To conclude, I see a future full of possibilities and challenges ahead, and as a church we must prepare to seize them for the Kingdom. We cannot ignore the behavioral changes we have undergone because of rapid technological progress; we must understand them and put them to use.
I hope this post has helped you see investment in technologies such as your church's website as an opportunity to expand the Kingdom, not as something superfluous. Encourage your church to use social networks, forums, blogs and cloud-computing services; make your church's site a dynamic, useful portal for people; invest in live video-streaming technology.
All of this without losing sight of the fact that this investment exists to reach lives.


Author: Renan Alencar de Carvalho

HARD DISK DRIVE
In computing, a hard disk or hard disk drive (HDD) is a non-volatile data-storage device that uses magnetic recording to store digital data. It consists of one or more rigid platters joined on a common spindle that spins at high speed inside a sealed metal case. Over each platter, on each of its faces, sits a read/write head that floats on a thin cushion of air generated by the rotation of the disks.
The first hard disk was invented by IBM in 1956. Over the years, hard disks have fallen in price while multiplying their capacity, and they have been the main secondary-storage option for PCs since their appearance in the 1960s.[1] Hard disks have kept their dominant position thanks to constant increases in recording density, which have kept pace with secondary-storage needs.[1]
Sizes have also varied greatly, from the first IBM disks to today's standardized form factors: 3.5" for PC and server models, 2.5" for portable devices. All of them communicate with the computer through a disk controller, using a standardized interface. The most common today are IDE (also called ATA or PATA), SCSI (generally used in servers and workstations), Serial ATA and FC (used exclusively in servers).
To use a hard disk, an operating system must apply a format that defines one or more partitions. Formatting consumes a fraction of the available space, depending on the format used. In addition, manufacturers of hard disks, solid-state drives and flash cards state capacity using decimal SI prefixes (multiples of powers of 1000), whereas operating systems have traditionally reported sizes in binary multiples (powers of 1024, for which the IEC defines binary prefixes such as GiB). As a result the same drive can be shown with different figures: a 500 GB hard disk, for example, is reported as about 465 GiB (gibibytes; 1 GiB = 1024 MiB) on some operating systems and as 500 GB on others.
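The decimal-versus-binary discrepancy described above is easy to reproduce. A minimal Python sketch of the conversion itself (not any vendor tool):

```python
# Decimal (SI) vs. binary (IEC) capacity prefixes: a drive sold as
# "500 GB" counts in powers of 1000, while an OS reporting in GiB
# divides by powers of 1024, so the same capacity reads ~465.66 GiB.

def si_to_gib(capacity_gb: float) -> float:
    """Convert a marketed capacity in GB (10**9 bytes) to GiB (2**30 bytes)."""
    return capacity_gb * 10**9 / 2**30

print(round(si_to_gib(500), 2))   # 465.66
```

The gap widens with each prefix step: at the terabyte level the ratio is (1000/1024)^4, so a "1 TB" drive shows up as roughly 931 GiB.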
Solid-state drives serve the same purpose as hard disks and use the same interfaces, but they contain no mechanical disks: they store information in integrated-circuit memory. The use of this class of device was once limited to supercomputers because of its high price, though today such drives are far more affordable for the home market.
An old IBM hard disk (model 62PC, «Piccolo»), 64.5 MB, manufactured in 1979
At first hard disks were removable; today, however, they typically come sealed (except for a vent hole to filter the air and equalize pressure).
The first hard disk, which appeared in 1956, was the IBM 350, introduced with the RAMAC computer: it weighed a ton and held 5 MB. Larger than a modern refrigerator, this hard disk still worked with vacuum tubes and required a separate console to operate.
Its great merit was that access time was relatively constant across storage positions, unlike magnetic tape, where finding a given piece of information meant winding and unwinding the reels until the data was located, with very different access times for each position.
The initial hard-disk technology was relatively simple: a metal disk was coated with magnetic material and formatted into concentric tracks, which were then divided into sectors. The magnetic head encoded information by magnetizing tiny sections of the disk, using a binary code of "zeros" and "ones". Bits recorded this way can remain intact for years. Originally each bit lay horizontally on the disk's magnetic surface, but ways were later found to record the information more compactly.
The Frenchman Albert Fert and the German Peter Grünberg (both Nobel laureates in Physics for their contributions to magnetic storage) are credited with discovering the phenomenon known as giant magnetoresistance, which made it possible to build more sensitive read/write heads and pack bits more densely on the disk surface. From these discoveries, made independently by the two researchers, came a spectacular growth in hard-disk storage capacity, which rose 60% per year during the 1990s.
In 1992, 3.5-inch hard disks held 250 megabytes; ten years later they had passed 40 gigabytes (40,000 megabytes). Today, everyday hard disks exceed 3 terabytes (3,000,000 megabytes).
In 2005 the first mobile phones with hard disks were introduced by Samsung and Nokia, but they had little success: flash memory ended up displacing them, above all because of the disks' fragility and flash's technical advantages.

Characteristics of a hard disk

The characteristics to consider in a hard disk are:
  • Average access time: the mean time the head takes to reach the desired track and sector; it is the sum of the average seek time (reaching the track), the read/write time and the average latency (reaching the sector).
  • Average seek time: the mean time the head takes to reach the desired track; it is half the time the head needs to travel from the outermost track to the innermost one.
  • Read/write time: the mean time the disk takes to read or write new information. It depends on the amount of data to be read or written, the block size, the number of heads, the time per revolution and the number of sectors per track.
  • Average latency: the mean time the head waits for the desired sector to arrive; it is half the time of one full rotation of the disk.
  • Rotational speed: the platters' revolutions per minute. The higher the rotational speed, the lower the average latency.
  • Transfer rate: the speed at which information can be transferred to the computer once the head is positioned on the correct track and sector. It can be a sustained or a peak figure.
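The definitions above combine into a simple back-of-the-envelope calculation. A minimal Python sketch; the 9 ms seek time is an assumed example figure, not a quoted spec:

```python
# Average rotational latency is half a revolution, per the definition
# above: latency_ms = 0.5 * (60,000 ms per minute / rpm).

def avg_latency_ms(rpm: int) -> float:
    """Mean wait for the desired sector to rotate under the head."""
    return 0.5 * 60_000 / rpm

def avg_access_ms(seek_ms: float, rpm: int) -> float:
    # Average access time = average seek time + average rotational latency
    # (read/write time itself is excluded here for simplicity).
    return seek_ms + avg_latency_ms(rpm)

print(round(avg_latency_ms(7200), 2))        # 4.17
print(round(avg_access_ms(9.0, 7200), 2))    # 13.17
```

This also shows why rotational speed matters: at 15,000 rpm the average latency drops to 2.0 ms.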
Other characteristics include:

Physical structure

Components of a hard disk. Left to right, top row: lid, case, platter, spindle; bottom row: insulating foam, control circuit board, read/write head, actuator and magnet, screws.
Inside a hard disk; the surface of a platter is visible, with the retracted read/write head at the left.
Inside a hard disk there are one or more concentric disks (of aluminium or glass) called platters (normally between 2 and 4, though up to 6 or 7 depending on the model), all of which spin together on the same spindle to which they are attached. The head assembly (the read/write mechanism) consists of a set of arms parallel to the platters, aligned vertically and moving in unison, with the read/write heads at their tips. As a rule there is one read/write head for each surface of each platter. The heads can move toward the inside or the outside of the platters, which, combined with the platters' rotation, lets them reach any position on the platter surfaces.
Each platter has two faces, and each face needs its own read/write head. Looking at the Cylinder-Head-Sector diagram below, at first glance there appear to be 4 arms, one per platter. In reality each arm is double and carries 2 heads: one to read the upper face of the platter and one to read the lower face. So there are 8 heads for 4 platters, although for commercial reasons not all faces are always used, and hard disks exist with an odd number of heads or with disabled heads. The read/write heads never touch the disk; they pass extremely close to it (as little as 3 nanometres), riding on a very thin film of air that forms between them and the platters as these spin (some disks include a mechanism that keeps the heads off the platters until the spin speed guarantees that this film has formed). If a head ever touched a platter surface it would cause severe damage, gouging it badly, because of how fast the platters spin (at 7,200 revolutions per minute, the edge of a 3.5-inch disk moves at 129 km/h).
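The 129 km/h figure quoted above can be checked directly. A short Python sketch; the ~95 mm platter diameter is an assumption for a 3.5-inch-class drive:

```python
# Linear speed at the platter edge: circumference times revolutions
# per second, converted from m/s to km/h.
import math

def edge_speed_kmh(diameter_m: float, rpm: float) -> float:
    circumference = math.pi * diameter_m      # metres per revolution
    m_per_s = circumference * rpm / 60.0      # revolutions per second
    return m_per_s * 3.6                      # m/s -> km/h

print(round(edge_speed_kmh(0.095, 7200)))   # 129
```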

Addressing

Cylinder, Head and Sector
Track (A), Sector (B), Sector of a track (C), Cluster (D)
Several concepts are used to refer to zones of the disk:
  • Platter: each of the disks inside the hard disk.
  • Face: each of the two sides of a platter.
  • Head: the number of heads.
  • Track: a circumference within a face; track 0 is at the outer edge.
  • Cylinder: a set of tracks; all the circumferences that are vertically aligned (one per face).
  • Sector: each of the divisions of a track. The sector size is not fixed; the current standard is 512 bytes, though it will soon be 4 KiB. In the past the number of sectors per track was fixed, which wasted space significantly, since the outer tracks can hold more sectors than the inner ones. Hence the appearance of ZBR (zone bit recording), which increases the number of sectors on the outer tracks and uses the disk more efficiently. Tracks are grouped into zones whose tracks have the same number of sectors: the farther a zone is from the centre of the platter, the more sectors its tracks contain. With ZBR, moreover, reading sectors from the outer cylinders gives a higher transfer rate in bits per second, because they spin at the same angular velocity as the inner cylinders but hold more sectors.[3]
The first addressing scheme used was CHS (cylinder-head-sector), since those three values pinpoint any piece of data on the disk. Later a simpler scheme was created: LBA (logical block addressing), which divides the whole disk into sectors and assigns each one a unique number. This is the scheme used today.
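The CHS-to-LBA relationship described above is a straightforward renumbering. A minimal sketch, assuming a classic 16-head, 63-sectors-per-track geometry (illustrative values, not tied to any particular drive):

```python
# CHS -> LBA: number every sector sequentially across the disk.
# Cylinders and heads are 0-based; sectors in CHS are 1-based.

def chs_to_lba(c: int, h: int, s: int, heads: int, spt: int) -> int:
    """heads = heads per cylinder, spt = sectors per track."""
    return (c * heads + h) * spt + (s - 1)

# The very first sector of the disk:
print(chs_to_lba(0, 0, 1, heads=16, spt=63))   # 0
# First sector of the next cylinder (after 16 tracks of 63 sectors):
print(chs_to_lba(1, 0, 1, heads=16, spt=63))   # 1008
```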

Connection types

A hard disk can connect to the motherboard through several interface types: SATA, IDE, SCSI or SAS:
  • IDE: Integrated Drive Electronics, also called ATA (Advanced Technology Attachment), controls mass-storage devices such as hard disks, with ATAPI (Advanced Technology Attachment Packet Interface) covering other devices. Until roughly 2004 it was the dominant standard thanks to its versatility and low cost. Its cables are flat, wide and long.
  • SCSI: interfaces designed for hard disks of high capacity and rotational speed. They come in three specifications: Standard SCSI, Fast SCSI and Fast-Wide SCSI. Average access time can reach 7 milliseconds, and sequential transfer speed can theoretically reach 5 MB/s for Standard SCSI disks, 10 MB/s for Fast SCSI and 20 MB/s for Fast-Wide SCSI (SCSI-2). A SCSI controller can handle up to 7 SCSI hard disks (or 7 SCSI peripherals) in a daisy-chain arrangement. Unlike IDE disks, they can work asynchronously with respect to the microprocessor, which allows higher transfer rates.
  • SATA (Serial ATA): the newest of these connection standards, it uses a serial bus for data transmission and is notably faster and more efficient than IDE. There are three versions: SATA 1 at up to 150 MB/s (now discontinued); SATA 2 at up to 300 MB/s, the most widespread today; and SATA 3 at up to 600 MB/s, which is starting to gain ground. Physically it is much smaller and more convenient than IDE, and it supports hot-plugging.
  • SAS (Serial Attached SCSI): a serial data-transfer interface, successor to parallel SCSI, though it still uses SCSI commands to interact with SAS devices. It increases speed and allows hot-plugging. One of its main features is that throughput scales as devices are added: it can sustain a constant transfer rate for each connected device, and it removes SCSI's 16-device limit, which is why SAS is expected to gradually replace SCSI. Moreover, the connector is the same as SATA's, so SATA hard disks can be attached for applications with lower speed requirements, saving cost. SATA drives can therefore be used with SAS controllers, but not the reverse: a SATA controller does not recognize SAS disks.

Form factor

The earliest hard-disk "form factors" inherited their dimensions from floppy-disk drives: the disks could be mounted in the same chassis, and such hard disks came to be known colloquially as FDD-type ("floppy-disk drive") units.
Form-factor compatibility has persisted at 3½ inches (8.89 cm) even after floppy disks with smaller dimensions appeared.
  • 8 inches: 241.3×117.5×362 mm (9.5×4.624×14.25 inches).
    In 1979, Shugart Associates released the first form-factor-compatible hard disk, the SA1000, with the same dimensions as, and compatible with the interface of, 8-inch floppy drives. Two versions were available: full height and half height (58.7 mm).
  • 5.25 inches: 146.1×41.4×203 mm (5.75×1.63×8 inches). This form factor was first used by Seagate's hard disks in 1980, with the same footprint and maximum height as 5¼-inch FDDs, i.e. 82.5 mm maximum.
    The half-height variant, 41.4 mm (1.64 inches), is the one commonly seen today; most 120 mm optical drives (DVD/CD) use the 5¼-inch half-height form factor, which was also used for hard disks. The Quantum Bigfoot, in the late 1990s, was the last hard disk to use it.
  • 3.5 inches: 101.6×25.4×146 mm (4×1×5.75 inches).
    This form factor was first used by Rodime hard disks, with the same footprint as 3½-inch floppy drives and a height of 41.4 mm. That height has largely been replaced by the 25.4 mm (1-inch) "slim" or "low-profile" height used by most hard disks today.
  • 2.5 inches: 69.85×9.5-15×100 mm (2.75×0.374-0.59×3.945 inches).
    Introduced by PrairieTek in 1988, this form factor does not correspond to any floppy-drive size. It is widely used in hard disks for mobile equipment (laptops, music players, etc.). The dominant variant today is the 9.5 mm-high laptop drive, though higher-capacity drives are 12.5 mm high.
  • 1.8 inches: 54×8×71 mm.
    Introduced by Integral Peripherals in 1993, this form factor evolved into ATA-7 LIF with the dimensions indicated; its use has grown in digital audio players and subnotebooks. The original variant held 2 GB to 5 GB and fit in a PC-card expansion slot. These drives are commonly used in iPods and hard-disk-based MP3 players.
  • 1 inch: 42.8×5×36.4 mm.
    Introduced in 1999 by IBM as the Microdrive, it fits in a CompactFlash Type II slot. Samsung calls the same form factor 1.3 inches.
  • 0.85 inches: 24×5×32 mm.
    Toshiba announced this form factor on 8 January 2004 for use in mobile phones and similar applications, including SD/MMC-slot-compatible hard disks optimized for video storage in 4G handsets. Toshiba currently sells 4 GB (MK4001MTD) and 8 GB (MK8003MTD) versions and holds the Guinness record for the smallest hard disk.
The major manufacturers suspended development of new 1-inch (1.3-inch) and 0.85-inch products in 2007 because of falling flash-memory prices, although in 2008 Samsung introduced another 1.3-inch drive, the SpinPoint A1.
The "inch" designation of these form factors usually does not describe any actual product dimension (the more recent form factors are specified in millimetres); it indicates the relative size of the disk, preserved for historical continuity.

Logical structure

Inside the disk are found:

Mechanical operation

A hard disk typically has:
  • Platters on which the data are recorded.
  • A read/write head assembly.
  • A motor that spins the platters.
  • An electromagnet that moves the head assembly.
  • A control circuit board, which includes the interface to the computer and the cache memory.
  • A small desiccant bag (silica gel) to prevent humidity.
  • A case, which must keep dirt out, which is why it usually includes an air filter.

Integrity

Because of the extremely small distance between the heads and the disk surface, any contamination of the read/write heads or the platters can cause a head crash: a disk failure in which the head scrapes the platter surface, often grinding away the thin magnetic film and causing data loss. Head crashes can be caused by electronic failure, a sudden power cut, physical shock, wear, corrosion, or poorly manufactured heads or platters.
Hard-disk head
The hard disk's spindle system relies on the air pressure inside the enclosure to support the heads at the correct height while the disk spins. A hard disk needs a certain range of air pressures to work properly. The connection to the outside environment and its pressure is made through a small hole in the enclosure (about 0.5 mm in diameter), normally with a filter on the inside (the breather filter, see below). If the air pressure is too low, there is not enough lift for the head, which gets too close to the disk, with a risk of failure and data loss. Specially manufactured disks are needed for high-altitude operation, above 3,000 m. Note that modern aircraft have a pressurized cabin whose interior pressure normally corresponds to an altitude of at most 2,600 m, so ordinary hard disks can be used safely in flight. Modern disks include temperature sensors and adjust to ambient conditions. Breather holes can be seen on all disks (they normally have a sticker next to them warning the user not to cover them). The air inside an operating disk is in constant motion, swept along by friction with the platters. This air passes through an internal recirculation filter to remove any contaminants left over from manufacturing, any particles or chemicals that may somehow have entered the enclosure, and any particles generated in normal operation. Very high humidity over a long period can corrode the heads and platters.
IBM hard-disk head over the disk platter
For giant magnetoresistive (GMR) heads in particular, a minor contamination incident (one that does not wear away the disk's magnetic surface) can still cause temporary overheating in the head, through friction with the disk surface, making the data unreadable for a short period until the head temperature stabilizes (so-called "thermal asperity", a problem that can be partly handled by appropriate electronic filtering of the read signal).
The hard disk's electronics control the movement of the actuator and the rotation of the disk, and perform the reads and writes requested by the disk controller. The firmware of modern disks can schedule reads and writes efficiently across the disk surfaces and remap sectors that have failed.

Present and future

The current generation of hard disks uses perpendicular magnetic recording (PMR), which allows higher storage density. There are also so-called "green" disks (GP, Green Power), which use energy more efficiently.

Comparison of solid-state drives and hard disks

A solid-state drive, or SSD, is a data-storage device that can be built with non-volatile or volatile memory. The non-volatile kind are solid-state drives that, as electronic devices, are currently built with flash-memory chips. They are not disks, but in practice they play the same role, bringing more advantages than technological drawbacks. The market is therefore beginning to glimpse the possibility that this type of solid-state drive will eventually replace the hard disk for non-volatile storage in computer engineering.
These media are very fast, since they have no moving parts, and they consume less power. All of this makes them very reliable and physically durable. However, their cost per GB is still much higher than that of hard-disk technology, a very important metric given today's storage needs, which are measured in terabytes.[4]
Even so, the industry is betting on this technological path for the consumer market,[5] although such systems must be integrated properly,[6] as is being done in high-performance computing.[7] Together with steadily falling costs, this technology may come to serve as a general, energy-efficient method of data storage if its logical role within current operating systems is optimized.[8]
The disks that are not disks: solid-state drives have repeatedly been categorized as "disks", which is entirely incorrect, since unlike their predecessors they do not store data on platters or cylindrical surfaces. This confusion commonly leads people to believe that "SSD" stands for Solid State Disk rather than Solid State Drive.
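The cost-per-GB metric mentioned above is simply price divided by capacity. A tiny Python sketch with hypothetical prices (illustrative placeholders, not market data):

```python
# Cost per gigabyte: the usual metric for comparing SSD and HDD storage.

def cost_per_gb(price: float, capacity_gb: float) -> float:
    return price / capacity_gb

ssd = cost_per_gb(120.0, 120)     # e.g. a 120 GB SSD priced at $120
hdd = cost_per_gb(80.0, 2000)     # e.g. a 2 TB HDD priced at $80
print(round(ssd, 2), round(hdd, 3))   # 1.0 0.04
print(round(ssd / hdd))               # SSD costs ~25x more per GB here
```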

Hybrid drives

Hybrid drives combine the advantages of conventional mechanical drives with those of solid-state drives. They couple a set of flash-memory units inside the mechanical drive, using the solid-state area for dynamic storage of frequently used data (as determined by the drive's software) and the mechanical area for mass storage. This achieves performance close to that of solid-state drives at a substantially lower cost. In the current market (2011), Seagate offers its "Momentus XT" model with this technology.[9]

Manufacturers

A Western Digital 3.5-inch 250 GB SATA HDD.
A Seagate 3.5-inch 1 TB SATA HDD.
The technological resources and know-how required to develop and produce modern disks mean that, since 2007, more than 98% of the world's hard disks have been made by a handful of large companies: Seagate (now the owner of Maxtor), Western Digital (owner of Hitachi's disk business, which in turn had acquired IBM's former disk-manufacturing division) and Fujitsu, which still makes portable and server disks but stopped making desktop disks in 2001; its remaining disk business was later sold to Toshiba. Toshiba is one of the main manufacturers of 2.5-inch and 1.8-inch laptop hard disks. TrekStor is a German manufacturer that ran into insolvency in 2009 but remains active. ExcelStor is a small Chinese hard-disk manufacturer.
Dozens of former hard-disk manufacturers have ended up merged or have closed their disk divisions; as device capacity and product demand grew, margins shrank, and the market underwent significant consolidation in the late 1980s and late 1990s. The first casualty in the PC market was Computer Memories Inc.; after an incident with defective 20 MB disks in 1985, CMI's reputation never recovered, and it left the hard-disk market in 1987.