Clipper Program Conversion
Not all file attributes behave the same way. To include a file in the process, you must explicitly specify the HIDDEN, SYSTEM, VOLUME, and DIR attributes. However, if a file carries no attribute, or only R/O or ARCHIVE, it does not matter which value is passed.
These rules for attribute handling are grounded in DOS, which compares the specified value with the actual file attributes in this way. Since in some circumstances this can lead to problems, the function allows you to switch on additional EXACT ATTRIBUTE MATCHING.

The functions in this chapter let you manipulate the drive mappings of workstations. New mappings can be created, and existing mappings can be deleted. Furthermore, you can query information about current mappings.
The basic function of this group is used to create or delete a mapping.
All extended possibilities, such as search drives and fake roots, are supported. Drive mappings can be defined temporarily; temporary mappings are deleted automatically at the end of an application.

All of the CA-Clipper Tools functions are described in detail in this four-volume Reference Guide. To help you find the function you need in a particular situation, the Reference Guide is divided into chapters. Each chapter presents a group of functions that serve a particular purpose, such as date functions, database functions, or system functions. Each function is then described in detail on a separate page.
Appendixes provide keyboard tables, DOS error codes, and Novell network error codes.

Each Netware file server maintains a database of the users and resources available on the network. This special-purpose database is called the bindery. The bindery contains objects, each uniquely specified by an object name and an object type. Possible object types are users, user groups, print queues, and print servers. Each object has a number of properties associated with it that can be addressed by name and contain information about the object. For example, the property GROUPS_I'M_IN contains a list of the user groups of which a user is a member.

Netware internally uses a high-low byte sequence to store numeric values, which is contrary to the standard format of the 80x86 processor family (low-high sequence). For efficiency reasons, the Netware functions of CA-Clipper Tools expect numeric values that are passed to Netware (object types or object IDs) in the high-low sequence. In practice this rarely affects your work with CA-Clipper Tools, because symbolic constants for the most important object types are defined in high-low format in the header file CTNNET.CH.

This chapter contains some useful functions that do not belong in any other chapter of network functions. The functions and are not network functions, but they have been included in CA-Clipper Tools specifically for working with internet addresses that are returned or required by functions such as.
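For illustration, a conversion between the two byte orders can be written in a few lines of Harbour/Clipper. This helper is a sketch (it is not part of CA-Clipper Tools); it relies only on the standard I2Bin()/Bin2I() functions, which work in the 80x86 low-high order, so producing the high-low form is just a byte swap:

```harbour
// Sketch: convert a 16-bit value between the 80x86 low-high order
// and Netware's high-low order by swapping the two bytes.

FUNCTION NumToHiLo( nValue )
   LOCAL cLoHi := I2Bin( nValue )                        // low byte first
   RETURN SubStr( cLoHi, 2, 1 ) + SubStr( cLoHi, 1, 1 )  // high byte first

FUNCTION HiLoToNum( cHiLo )
   RETURN Bin2I( SubStr( cHiLo, 2, 1 ) + SubStr( cHiLo, 1, 1 ) )
```

For example, NumToHiLo( 1 ) yields Chr(0) + Chr(1), the bindery object type 0001h (user) in the order Netware expects.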
The function is very useful if you are developing applications that access Netware 2.2 and Netware 3.11 servers because the behavior of some CA-Clipper Tools functions depends on the Netware version used (for example, ). Other functions are available for only one Netware version (for example, ). The data exchange between two workstations is interrupt controlled. Incoming data is copied in the background to a receiving buffer and can then be read with CA-Clipper Tools functions. As soon as the receiving buffer is full, incoming data is discarded. Outgoing data is copied to a sending buffer. While the CA-Clipper application continues, data is sent in the background from the sending buffer to the target address.
Interrupt-controlled sending of data is necessary to avoid long waits when you send large amounts of data. When you use IPX/SPX communication or one of the two possible NetBIOS communication variants, the data is internally broken up into packets. The packet size depends on the protocol used (IPX: 546 bytes, SPX: 534 bytes, NetBIOS: 512 bytes).
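The packetizing described above is handled internally, but its effect can be sketched in a few lines of Harbour/Clipper. The function below is illustrative only (it is not a CA-Clipper Tools API):

```harbour
// Sketch: break a send buffer into protocol-sized packets,
// e.g. SplitPackets( cBuffer, 546 ) for IPX.

FUNCTION SplitPackets( cData, nSize )
   LOCAL aPackets := {}
   LOCAL nPos := 1
   DO WHILE nPos <= Len( cData )
      AAdd( aPackets, SubStr( cData, nPos, nSize ) )
      nPos += nSize
   ENDDO
   RETURN aPackets
```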
The packets are sent sequentially. Normally, the packet structure of the transmitted data is transparent and does not affect the handling of the data: the receiving buffer of the target workstation receives the data in the same way the data was written to the sending buffer. When you use the IPX/SPX protocols, in addition to the data, the header included in each IPX or SPX packet can be transmitted to the receiving buffer. This header makes a number of packet-specific functions possible. Tables 31.1 and 31.2 describe the structure of the IPX and SPX headers. The header positions start with 1, corresponding to string functions.
IPX (Internetwork Packet Exchange protocol) is a rudimentary protocol whose main advantage is that data can be sent to all waiting workstations in an internal network with only one call. However, IPX has no suitable handshake mechanism that guarantees successful delivery and correct processing of the sent data on the destination workstation. (Related mechanisms can be implemented with the IPX functions of CA-Clipper Tools; however, these mechanisms are application specific and have no general validity.)

Unlike SPX, which can be seen as a true point-to-point communication protocol, IPX communication is implemented as pseudo point-to-point communication. When you use IPX, data for a specific workstation can be received by any workstation. However, a destination address is defined for interrupt-controlled sending. (This address can specify all workstations within an internal network.) The destination address is not fixed by the protocol, so it is not necessary to specify the destination address each time you access the send buffer. When you use the SPX protocol, a connection between two workstations is established.

The IPX/SPX communication between two workstations is based on two sockets, one on each side of the connection, that can be opened and closed, similar to files. Sockets are represented by numeric values between 1 and 65535. Some socket numbers are reserved, either by the Netware operating system or by Novell for third-party vendors.
The sockets between 16384 and 20480 (4000h to 5000h) are the dynamic sockets, meaning there are no reserved sockets in this range. A collision with socket numbers reserved by Netware can be avoided by using sockets from this range.

The NetBIOS communication functions can be used in all networks that are based on the NetBIOS specifications. In CA-Clipper Tools, two kinds of NetBIOS communication have been implemented: NetBIOS datagram and NetBIOS session communication. NetBIOS datagram communication is comparable to IPX communication: successful delivery of sent data to the destination workstation is not guaranteed by the protocol. Like IPX, a datagram connection can also be implemented as pseudo point-to-point communication between two workstations. Additionally, a workstation can communicate with a group of workstations or with all workstations in a network.
For communication with other workstations in a NetBIOS network, NetBIOS names are used. A NetBIOS name can be up to 15 characters long and is case sensitive. NetBIOS differentiates between two kinds of names: station names, which specify a workstation and are unique in a network, and group names, which can be assigned to any number of stations. Each workstation maintains a local name table with up to 20 NetBIOS names, and a workstation can be addressed by any name in its name table. Once you have defined station or group names, messages can easily be sent to a station or a group of stations.

With the Novell utility PRINTCON.EXE, print job definitions that contain the settings for a capture process can be created.
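A small helper makes the name rules concrete. This is a sketch (not a CA-Clipper Tools function): it rejects names longer than 15 characters and space-pads shorter names, preserving case because NetBIOS names are case sensitive:

```harbour
// Sketch: validate and space-pad a NetBIOS name to 15 characters.

FUNCTION NetBiosName( cName )
   IF Empty( cName ) .OR. Len( cName ) > 15
      RETURN NIL              // invalid name
   ENDIF
   RETURN PadR( cName, 15 )   // pad with spaces, keep the case
```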
The functions in this chapter allow you to control the print job definitions of the current user and all other users on a file server. You can create new job definitions and query or modify the settings in existing definitions. It is also possible to read the settings of a job definition and directly start a capture process. This function call is equivalent to a call of the CAPTURE utility with the parameter /J=.

Under Netware 3.x, control of network printers is no longer handled by a file server but by a print server. (However, as an NLM, the print server can also run on a file server.) Under Netware 2.2, a print server VAP is available. The functions of this chapter provide access to the most important information and functions of a print server.
For example, you can determine the status of a print server printer. In conjunction with the functions for point-to-point communication, a CA-Clipper application is able to emulate a remote printer (see the sample program RPRINTER.PRG).
Using CA-Clipper Tools, you can use up to four serial ports simultaneously. You can create sending and receiving buffers of up to 64 KB in size. Characters for background transmission are placed in the sending buffer, while characters received through the port are stored by an interrupt handler. You can determine the number of characters in the receiving buffer from your CA-Clipper program and read as many of the available characters as you like. Additional special control functions exist for the sending buffer that give the governing program full control.
It is also possible to engage a software or hardware handshake that is performed completely in the background. As previously mentioned, the CA-Clipper Tools functions support both hardware and software handshakes. As soon as the receiving buffer comes within one page of overflowing, a special handshake character is transmitted that tells the other side that no further data should be sent. Whether you implement the hardware or the software handshake depends on the type of data transmission. Hardware handshakes use the physical port control lines, usually RTS and CTS. Within the scope of the CA-Clipper Tools functions, these control lines cannot be used for modem transmission, because modems are generally not able to reproduce the control lines directly over the transmission route (i.e., a telephone connection). A software handshake must be implemented in such cases.
As previously mentioned, remote data transmission is, as a rule, implemented only through a software handshake. A significant disadvantage of this method is that the characters used for flow control, Chr(19) (XOFF) and Chr(17) (XON), can no longer appear in the payload data. Because these characters do appear in binary files, direct remote transmission of binary data is not possible; transmission protocols must be used. You will find XMODEM routines written in CA-Clipper in the example programs. Using the CA-Clipper Tools port functions and this example as a basis, other protocols can be developed fairly simply.
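To give an impression of what such a protocol adds, here is a sketch of a single XMODEM data packet in Harbour/Clipper: the start byte SOH, the block number and its one's complement, 128 data bytes (padded with Ctrl-Z), and an 8-bit arithmetic checksum. This is illustrative only; a complete implementation such as the example program also handles ACK/NAK and retries:

```harbour
// Sketch: build one 132-byte XMODEM packet with an 8-bit checksum.

FUNCTION XmodemPacket( nBlock, cData )
   LOCAL cPayload := PadR( cData, 128, Chr( 26 ) )   // pad with Ctrl-Z
   LOCAL nSum := 0
   LOCAL i
   FOR i := 1 TO 128
      nSum := ( nSum + Asc( SubStr( cPayload, i, 1 ) ) ) % 256
   NEXT
   RETURN Chr( 1 ) + ;                               // SOH
      Chr( nBlock % 256 ) + ;                        // block number
      Chr( 255 - nBlock % 256 ) + ;                  // one's complement
      cPayload + Chr( nSum )                         // data + checksum
```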
Never run anything as root on a Linux box, just as you should never run anything as administrator unless you are forced to. On Linux, you can use the command su to start a program under another user. Please read the last answer here: imagine you have a user called 'harbour' and a script that user harbour can execute:

su -s /bin/sh -c 'startleto.sh myargs' harbour

Please note that startleto.sh must be able to detach; that is, it must exit while the leto server keeps running as a daemon (I didn't see the script).
// Determine the server type and log in
USE ( workdir + 'wsdata' ) NEW VIA 'DBFCDX'   // workstation setup table (local)
IF ! Empty( srvrType )
   DO CASE
   CASE srvrType = 'L'        // LetoDB server
      mainPath := '//' + AllTrim( srvrname ) + ':' + ;
         iif( Empty( srvrport ), '2812', AllTrim( srvrport ) ) + '/'
      filePath := mainPath
      IF letoConnect( mainPath ) == -1
         Alert( 'Unable to connect to server.' )
         QUIT
      ENDIF
      REQUEST LETO
      RDDSETDEFAULT( 'LETO' )
   CASE srvrType = 'N'        // HBNetIO server
      IF ! Empty( srvrname )
         IF ! NetioConnect( AllTrim( srvrname ), ;
               iif( Empty( srvrport ), '2941', AllTrim( srvrport ) ) )
            Alert( 'Unable to connect to server.' )
            QUIT
         ENDIF
         mainPath := 'net:'   // -rootdir
         filePath := '.'
      ENDIF
   OTHERWISE
   ENDCASE
ENDIF
currSrvrType := srvrType

Yep, Ash. An addendum, checked out of my own interest: a '/' will be added internally if the given root directory for the HBNETIO server does not end with a path separator (e.g. '/' for Linux), so your first version was totally correct. There is wilfully no way to retrieve this server root directory. One workaround you just found, as another user showed me earlier: put the hbnetio executable into the data directory on the server. Then we can use netioFuncExec( 'HBDIRBASE' ) to retrieve the path of the executable, in that case the server root directory. But in this case we don't need to know the absolute path on the server at all: we can use just the file name without any path prefix for opening a DBF with RPC commands, e.g. for creating an index. This hbdbExists('net. ') seems the most flexible way, as you can then easily change that directory.
best regards, Rolf
elch 10.02.14 4:55.

Maybe this helps: ShareTables = 0; if 0 (the default, and the only mode from the start of the letodb project), letodb opens all tables in exclusive mode, which allows it to increase speed. If 1 (a new mode, added June 11, 2009), tables are opened in the same mode as the client applications open them, exclusive or shared, which allows letodb to coexist with other types of applications.
elch 10.02.14 11:51.

Exactly, as I already wrote: an 'Exception SIGSEGV' error arises on Linux. Windows will only report in such cases: 'The app has a problem.'
In the LetoDB log file one can find: -0-0-0. And if I search the LetoDB source code, it seems that an already EXCLUSIVE-ly opened DBF is detected correctly, but then something unplanned happens. So this problem can perhaps be corrected relatively easily by the developers of LetoDB. But my other problem report, about the index files, gives me real headaches; that sounds like the k.o. for LetoDB. And it is certainly not my task to fiddle around in server code ;-)
best regards, Rolf
Ash 13.02.14 7:03.

Hi, that looks like NO bug!

Hi, I pushed an error description to the place shown by Nenad. Meanwhile I looked a bit deeper into the source code: it could be 'only' a bug. The basic structures for maintaining multiple index files seem to be there. I just guess it will also happen when someone tries to open multiple CDX index files. But with this USE EXCLUSIVE sigsegv crash, plus my described error, there are at least two very basic and heavy problems during the first testing, so I have no good feeling about it. I will focus on HBNETIO first; all the work to exchange the 'server' is already done.
best regards, Rolf
Ash 15.02.14 7:47.

I share your concerns. Of course, I'd prefer to use a rock-solid product like NetIO, but unfortunately it does not solve my main problem. As I see it, Kresin and Pavel are still working on LetoDB; they just completed the PHP client and corrected some bugs, so I think that with a little patience LetoDB can become quite a usable product.
Regards, NB
(Subject: Re: harbour-users Re: Conversion of an application from Clipper to Harbour + NetIO)

I'm not sure that this test shows the true situation.
In my case, the improvement is clearly evident in the real application. For example, I have a report that runs 4 minutes 15 seconds over the LAN; when switched to LetoDB, the time is 13 seconds, without any code change! But it would not even cross my mind to use LetoDB or any other product before being 100% sure that it will work properly!
NB

DBGOTOP, DBSEEK( fixed value ) // results for both at the same recno
repeated RLOCK with a 0.01 s delay until locked
REPLACE one value
DBUNLOCK
Here LetoDB also wins; let me estimate a 25% better time. All together nothing that blows me off the chair. What is incredibly fast (10 times?) with LetoDB: DBSKIP. And here maybe lies the secret of why a DBSETFILTER is so fast with LetoDB, since with an active filter many 'hidden' rows must be skipped.

Everybody has their own needs; for my main project I need 24/7 reliability. The apps will not even stop for the daily backup; only a short release and an exact restore of the DBF areas is done in that moment. In such a scenario a single workstation may fail temporarily, but not the server.
fperillo 15.02.14 15:00.
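The benchmark sequence elch describes can be sketched as follows; the field and key values are placeholders, not from the original post:

```harbour
// Sketch of the benchmarked step: position, lock with retry,
// replace one value, unlock.

PROCEDURE BenchStep()
   DBGOTOP()
   DBSEEK( 1000 )                  // a fixed key value (placeholder)
   DO WHILE ! RLock()
      Inkey( 0.01 )                // wait 0.01 s, then try the lock again
   ENDDO
   REPLACE value WITH value + 1    // 'value' is a placeholder field
   DBUNLOCK()
   RETURN
```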
Please try it if you want. I think the new file can be created in no more than 2 seconds for 50,000 records, unless they are very, very long records.
Client: please create a file from DB001 to a temp file with the following filter.
Server: USE DB001 SHARED; SET FILTER TO; COPY TO temp; RETURN 'temp' to the client.
Client: USE letodb:temp; browse.
As I said, this is a COPY of the data: if someone else changes some data, it will not be updated. It is a snapshot; this is exactly how it would work in a SQL world, but not in a shared-DBF world. Of course, if the user wants to browse the whole database, you can just open DB001. I'm quite sure you could get better response times with a proper netio/RPC setup. Usually a report has some input values and outputs a
Printer-on-paper report? Whatever you create, you can do it from the server. I use harupdf to create PDF reports. Almost all my report coding is done in two functions: the first gathers the filter values from the user, and the second uses the parameters passed from the first to create the report. Some reports are long, but moving the second function onto the server can shrink the time a lot. In fact, this system is already in use :)

If the user selects the lower part of the table, then I create a temporary table that contains only a few fields from the main table plus the RecNo in the main table.
Then I browse the temporary table, use the RecNo to find the appropriate record, and read the data as MainTable->Field. It is a solid system, but I'm not completely satisfied, because the filter condition contains a function that is very slow (I did not mention it for the sake of simplicity) and execution may take a while, even on the server.
Regards, NB

OK, I'll explain in more detail. This table is a list of items, and it is filtered by item type. For display I use the TBrowse class, and the filter is set in one procedure. Somewhere in the code this piece exists: CASE Ch == K_ALT_F ... SetFilter(). This happens in exactly 56 very different procedures, but the table of items is the same. Some procedures change data, some are read-only, and in some the user defines the columns.
In these procedures there are thousands of different functions, and there is no way to create one procedure that does everything. In some tables the user can edit the information and, more importantly, some information may be changed by other users (for example, the price and quantity of an item)!

The number of records is difficult to estimate. We have more than 7,000 users, and each of them has its own items and types of items. These tables may hold from 20 to 200,000+ items. The selection is even worse: the user can choose anywhere from 1 to 49,999 records out of 50,000. The perfect solution would be to stay with the SetFilter function, because changing all the procedures is a very big task. This ((plus a few thousand reports)) is the reason why I'm looking for a solution without code changes. A 'universal' data server is certainly not an ideal solution, but it gives me the most with no change to the source code. I'm sorry to bother you; I hope this discussion will be helpful to someone else :)
Regards, NB

Let's see if I understand: in 56 different source code files you display a TBrowse with a list of products.
In each of these 56 you may need to list the products read-only; in some you must be able to edit them, and you may need to have the values updated if some other user changes them (of course you have to refresh the rows). Since there may be more than 200,000 records, you must somehow be able to filter that list and show only a subset of the records, for example only the products of one producer (which may reduce the rows a lot), or all the products that have the letter A in their name (and there may be 100,000+).
Since the filter is 'query by example' style, you can't easily apply optimization, and SET FILTER TO is the quickest and easiest way for the programmer.

Sorry, English is definitely not my strongest side :) Our user base is 2,232 companies with a total of 7,292 workstations.

Each company has its own database, and its table of items typically has 1 to 2,000 records, but sometimes it can be 50,000, 200,000 or more. You understand the concept fully. But the filter cannot be 'letter X in the name'; let's say you can only filter by group or type of item. I've done some experiments with the filter in letodb. For example, this command is executed very efficiently on the server side:

SET FILTER TO At( I2Bin( Artikli->avrs ), 'AF 0F AA.' ) > 0

I first convert the array of type IDs into the string 'AF 0F AA.', then form the query and finally send it all to Leto. I do not expect total optimization; it is sufficient for me that the query is executed on the server side. I'll send some pieces of code when I come across a particular problem; there's no point now, it's too complicated. Many thanks for your efforts and valuable information; I'll keep your offer in mind.
Regards, NB

On Sat, 15 Feb 2014, Ash wrote: Hi, when building indexes via RPC in NetIO, the network traffic is reduced by half; a fair advantage, however.

Via RPC EVERYTHING is done on the server side. Only the request for the function call is sent from the client to the server, and then the final result is sent back from the server to the client. The cost is static and does not depend on the table size at all. Your information wrongly suggests that there is network traffic during indexing via RPC in NETIO.
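NB's filter technique can be sketched in Harbour as follows. The field name Artikli->avrs comes from the post; the helper function itself is hypothetical, and it assumes the item table is open in the current work area. Note that At() on a packed string can in principle match across ID boundaries, a detail this sketch does not handle:

```harbour
// Sketch: pack the selected type IDs with I2Bin() into one string,
// then the filter only has to test membership with At().

FUNCTION SetTypeFilter( aTypeIds )
   LOCAL cIds := ""
   AEval( aTypeIds, {| nId | cIds += I2Bin( nId ) } )
   SET FILTER TO At( I2Bin( Artikli->avrs ), cIds ) > 0
   RETURN NIL
```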
It's false information. The whole operation is done on the server side only, without any network calls.
best regards, Przemek
Ash 17.02.14 4:44.

On Mon, 17 Feb 2014, Ash wrote: Hi, I believe I have found the reason for the network traffic during the indexing process. I use the /data/accounts/comp folder on my Linux server for NetIO testing. This folder is also being shared via Samba and is mapped as the z: drive when I log on to my workstation; that makes it easier to move files around. The network traffic during the NetIO test was the chatter between Windows and Samba. When I ran the same test without the z: drive, there was no network traffic.

Yes, that explains the network traffic problem. Anyhow, in such a case you should also repeat your tests with pure NETIO file access, because the configuration you used doubled the number of network messages.
best regards, Przemek
elch 17.02.14 7:26.
Hi, I think that you missed the configuration details. The indexes were stored on another computer than the HBNETIO server, so the HBNETIO server was receiving requests from the client and accessing the files on the other server via the SMB protocol. I do not know why you would need such a configuration; simply access the files directly on the other server from the client by installing HBNETIO on that server. I do not see anything that can be changed in HBNETIO. If I'm missing something, please let me know what you need.
best regards, Przemek
Ash 17.02.14 10:26.

Okay! Ash, I respect you for your experience, and even more for Nenad's roughly one million source rows: WoW. I chose a different way, as my main application is the same for all potential users:
# more than half of my 170K application lines is my own library, which will always be needed for everything
# pushing the needs of, say, 50 workstations onto one single server would require a 'full-grown mainframe' bastard ;-)
So all workstations are planned to use HBNETIO for DBF access, as it seems faster than Samba. And all stations will have the option of RPC remote execution at hand for 'special forces' ;-), where such a thing is very rarely needed. Further, I have some additional tools of my own around that, like my own database management utility: it runs on the server itself without HBNETIO or anything else, but with impressively fast local Harbour access to the data. This tool has long been responsible for maintenance (i.e. PACK, updating DBF structures and, RARELY, reindexing if needed), and all the needed data is stored in 3 DBFs. I created this tool decades ago; we are actually talking about some 80 DBFs and 200+ NTXs. (Sure, not to boast to you or anyone else, only FYI, so you can better imagine my worries.
)
very best regards, Rolf
Ash 17.02.14 18:13.

Hello, thanks for the interest in LetoDb. The current working repository is the SourceForge CVS, rel-1-mt branch; it isn't outdated, the last update was today (the information on SourceForge's letodb main page isn't correct; that's the result of a bug in the SourceForge software). You can use the following console command to download it:

cvs checkout -r rel-1-mt -P letodb

The github repository is my personal one; I use it for testing purposes.
Regards, Alexander.

I used
cvs -d:pserver:anonymous@letodb.cvs.sourceforge.net:/cvsroot/letodb checkout -r rel-1-mt letodb
and it works fine.
Regards.