Run Java and J2ME?


fbiggod

Still Fresh
Joined
Mar 26, 2009
Messages
2
Being able to run Java games already created for mobile phones would give us a big library of titles from day one, and it could also attract many new developers.

Run Java and J2ME?


 
Do you mean that having a J2ME VM for the Pandora would let you run J2ME programs? I don't think it would attract many new developers, though.
 
Scala, my favourite language of all time and space (that I know of, at least), runs on the JVM, so I sure hope there will be a JVM available. I could then take care of porting lwjgl etc.; that shouldn't be a problem, since the drawing code in lwjgl, at least, is very isolated.

Too many people talk of porting CPython etc. I don't really know why someone would prefer Python to something like Scala, so to me the whole discussion seems dumb, and more focus should be laid on JVMs.
 
'dflemstr' said:
Too many people talk of porting CPython etc. I don't really know why someone would prefer Python to something like Scala, so to me the whole discussion seems dumb, and more focus should be laid on JVMs.
:D I'm gonna have to give you a good, thorough lesson some day on dynamic languages and why they're cool, mate. Here's a preview.
 
Hey, why not make that day today? :p

Not horribly off-topic, so go for it. The main reasons why I think that dynamic languages are the root of all evil (warning: this section might go ad absurdum) are:

- Type checking enables you to catch errors at compile time instead of in the middle of debugging (if it is done right, and not like in C++ or Java). Dynamic languages obviously lack this. Scala (my model language) does this very well by having type matching. I have never ever had a runtime error in a Scala program.

- IDE parsing, gathering metadata and the like become much easier and less CPU-intensive in non-dynamic languages, because the IDE/tool you are using doesn't have to parse the whole code just to enumerate a class's members (for example).

- Typed languages enable you to generate much faster code when compiling, because you can optimize things away by limiting the computation to what is required by a specific type. A (rather bad, but spontaneous) example of this is Java's 'basic types' like ints etc., which aren't objects and therefore don't need an object heap to operate, speeding things up. Then, look at Erlang if you need more convincing.

- Typed languages increase security. You can easily filter out unwanted types. In Python, for example: what if your function takes a string as an argument, but someone instead gives it a class closely resembling a string (with the same methods etc.) and thereby completely breaks your function? Reliable?

- Typed languages enable you to do some pretty astounding stuff, like pattern matching, implicit methods, proper lambda functions etc. I can elaborate more on this if asked.

So, let's hear it: why are dynamic languages that much better?

EDIT: currently looking at that video of yours
 
Dynamic languages are better for prototyping because you don't have to compile, and supposedly the errors are much better since the interpreter acts as a debugger.

Unfortunately, it certainly consumes as much power as a debugger, and I really don't like the idea of variables coming up at random in my code. If I spell a variable's name wrong, I expect something to say "What the hell is this?", not assign it a 0 and fail silently.
 
'lulzfish' said:
Unfortunately, it certainly consumes as much power as a debugger, and I really don't like the idea of variables coming up at random in my code. If I spell a variable's name wrong, I expect something to say "What the hell is this?", not assign it a 0 and fail silently.
Dynamic languages do complain about that (Python, for instance, raises a NameError for an undefined name). Don't take JavaScript as an example of good dynamic language design. It has stuff in it ranging from silly to brilliant.
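For instance, a misspelled name in Python fails loudly at runtime rather than silently becoming a new variable (a tiny sketch):
CODE

total = 0
try:
    totl += 1  # typo: 'totl' was never assigned
except NameError as e:
    print("caught it:", e)  # Python refuses to invent the variable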

But yes, they do tend to have slower runtimes. Enough development effort supposedly solves that; it did for various Lisps. It just doesn't matter THAT much.

I think we've been off-topic enough :) Yes, there will be Java. And yes, there will be a way to run MIDP 2.0 games, but it would be less than ideal.
 
lulzfish said:
Dynamic languages are better for prototyping because you don't have to compile, and supposedly the errors are much better since the interpreter acts as a debugger.

A dynamic language can also be compiled, and a typed language can also be interpreted. This is not what we're talking about here :p

EDIT: I looked through almost all of the video; there's nothing in JS that doesn't exist in Scala, too.
 
'dflemstr' said:
- Type checking enables you to catch errors at compile time instead of in the middle of debugging (if it is done right, and not like in C++ or Java). Dynamic languages obviously lack this. Scala (my model language) does this very well by having type matching. I have never ever had a runtime error in a Scala program.

- IDE parsing, gathering metadata and the like become much easier and less CPU-intensive in non-dynamic languages, because the IDE/tool you are using doesn't have to parse the whole code just to enumerate a class's members (for example).

- Typed languages enable you to generate much faster code when compiling, because you can optimize things away by limiting the computation to what is required by a specific type. A (rather bad, but spontaneous) example of this is Java's 'basic types' like ints etc., which aren't objects and therefore don't need an object heap to operate, speeding things up. Then, look at Erlang if you need more convincing.

- Typed languages increase security. You can easily filter out unwanted types. In Python, for example: what if your function takes a string as an argument, but someone instead gives it a class closely resembling a string (with the same methods etc.) and thereby completely breaks your function? Reliable?
Python, for one, uses what's called "duck typing". In short it means that if you create a class that is close enough to another class (in essence, a similar public interface etc.) that you could use them interchangeably in a context, you can. If I want an object to quack and it can, it will. This allows very flexible code. Think implicitly defined interfaces: a function doesn't actually need a variable of a specific type, it needs something that can do certain things.
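A minimal sketch of what that looks like (the classes are invented for the example):
CODE

class Duck:
    def quack(self):
        return "Quack!"

class Robot:
    # completely unrelated to Duck, but it has the right method
    def quack(self):
        return "beep-quack"

def make_it_quack(thing):
    # no type check anywhere: anything with a .quack() method is welcome
    return thing.quack()

print(make_it_quack(Duck()))   # -> Quack!
print(make_it_quack(Robot()))  # -> beep-quack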

Reliability is relative. If you absolutely need something to work in a certain way, you'll have to ditch class inheritance as well (in many languages, at least), because child classes can often override their parents' methods. You can check whether the parameters are of a specific type, but only if you need to. Python gives you that freedom.

'dflemstr' said:
- Typed languages enable you to do some pretty astounding stuff, like pattern matching, implicit methods, proper lambda functions etc. I can elaborate more on this if asked.
I would like to know more about what you mean by pattern matching and "proper" lambda functions.
 
'dflemstr' said:
Hey, why not make that day today? :p

Not horribly off-topic, so go for it. The main reasons why I think that dynamic languages are the root of all evil (warning: this section might go ad absurdum) are:

1. Type checking enables you to catch errors at compile time instead of in the middle of debugging (if it is done right, and not like in C++ or Java). Dynamic languages obviously lack this. Scala (my model language) does this very well by having type matching. I have never ever had a runtime error in a Scala program.

2. IDE parsing, gathering metadata and the like become much easier and less CPU-intensive in non-dynamic languages, because the IDE/tool you are using doesn't have to parse the whole code just to enumerate a class's members (for example).

3. Typed languages enable you to generate much faster code when compiling, because you can optimize things away by limiting the computation to what is required by a specific type. A (rather bad, but spontaneous) example of this is Java's 'basic types' like ints etc., which aren't objects and therefore don't need an object heap to operate, speeding things up. Then, look at Erlang if you need more convincing.

4. Typed languages increase security. You can easily filter out unwanted types. In Python, for example: what if your function takes a string as an argument, but someone instead gives it a class closely resembling a string (with the same methods etc.) and thereby completely breaks your function? Reliable?

5. Typed languages enable you to do some pretty astounding stuff, like pattern matching, implicit methods, proper lambda functions etc. I can elaborate more on this if asked.

So, let's hear it: why are dynamic languages that much better?
1. Type safety only catches some errors, even in Scala. You still have to do testing, and lots of it. You will catch all the errors your compiler would throw by running your unit tests. It's usually even faster.

2. I don't get this one. You get way less data about stuff by parsing it than by eval-ing it, and the speed of both is about the same.
Are you talking about monkey-patching? Whoever does that in their own program deserves to have the IDE ignore their added attributes. It's good practice to put all the stuff a class has in the class definition. It's very powerful and useful that you can monkey-patch stuff easily, but that doesn't mean you should do so all the time.

3. This has some truth to it. Traditional techniques for making languages fast do not work very well for most dynamic languages. JITs help, but not by much. V8, for example, only scratches the surface of the kinds of optimisations that can be done at runtime, when you know more about your objects than just their type. Also look at the various Lisps (some are ridiculously fast).

Erlang is a dynamic language. And it's slow; it's not a good example for this (it has other reasons for being slow, I know). Having non-heap entities has no relation to the dynamic-ness of a language.

In general, the theoretical speedup possible with dynamic languages is much higher than with statically typed languages.


4. No, they don't. Yes, you can; it's just optional.

5. Proper lambda? Are you talking about Python's expression-only lambda? That's simply a design choice. You can define functions anywhere if you do need lambda-like stuff. Look at how decorators work. Also look at Ruby.
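To illustrate that point, a small sketch (the function names are invented): a nested def does everything a "proper" lambda would, and a decorator is just a function that returns a function.
CODE

def make_adder(n):
    # a full def where other languages would use a lambda; it closes over n
    def add(x):
        return x + n
    return add

def logged(func):
    # a decorator: takes a function, returns a new wrapped function
    def wrapper(*args):
        print("calling %s with %r" % (func.__name__, args))
        return func(*args)
    return wrapper

@logged
def double(x):
    return x * 2

print(make_adder(3)(4))  # -> 7
print(double(21))        # logs the call, then -> 42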

'dflemstr' said:
EDIT: I looked through almost all of the video; there's nothing in JS that doesn't exist in Scala, too.
I know. The point was that a bad dynamic language has enough good stuff in it that it's almost pleasant if you stay away from the bad parts. And that has a lot to do with being dynamic (and Scheme-ish).

I think you're confusing dynamic types with weak types.
C has static, weak types.
Java has static, not-so-weak types.
JavaScript has dynamic, weak types (this combination can be unsafe).
Python has dynamic, strong types (you can't do "bla" + 2, only "bla" + str(2); see the snippet below).
Ruby has dynamic, almost-strong types (a bit looser and less consistent than Python).
Scala has mostly static, strong types.
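To make the strong-but-dynamic combination concrete (plain Python, nothing assumed beyond the standard interpreter):
CODE

# Strong: Python refuses to mix types implicitly; this raises TypeError.
try:
    "bla" + 2
except TypeError as e:
    print("rejected:", e)

# The conversion must be explicit:
print("bla" + str(2))  # -> bla2

# Dynamic: the check happens at runtime, and a name can be rebound
# to values of different types over its lifetime.
x = 2        # an int
x = "two"    # now a str; no declaration needed
print(type(x).__name__)  # -> str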
 
sindbad said:
1. Type safety only catches some errors, even in Scala. You still have to do testing, and lots of it. You will catch all the errors your compiler would throw by running your unit tests. It's usually even faster.


With type-checking, it's impossible to pass objects of the wrong type to a function. This counts for something. And Scala uses a very modular system, in that a type (aka class) can almost always be used for only one specific purpose (modularity and reusability are there, of course, through mixins, inheritance, class requirements etc.), so a lot of errors typically noticed at runtime in C or even Java can be caught at compile time. And remember that most of the time, you don't use only parameter lists for function calls in Scala, but "match"es, too. So you can require the data to be "a list of integers, where the first element of the list is even, and the following elements are spread out like a Fibonacci series" and have the compiler check that at compile time (most people don't, since it takes such a bloody long time for infinitely long types ;) , but it can be done).
sindbad said:
2. I don't get this one. You get way less data about stuff by parsing it than by eval-ing it, and the speed of both is about the same.
OK, sorry if I am using the wrong terms for some things; I'm not a CS major specialized in dynamic languages :p
What I meant was that in Python, you do not declare variables before using them (as in most dynamic languages), so the IDE/tool has to scan through your code to find the assignments you have made, and it becomes difficult to generate a class member list, and similar things. There are various related problems to this.
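To illustrate the problem, a minimal sketch (the class and attributes are invented for the example):
CODE

class Config:
    def __init__(self):
        self.host = "localhost"  # exists only once __init__ has run

    def load(self, data):
        for key, value in data.items():
            # attributes appear at runtime; no static scan can enumerate them
            setattr(self, key, value)

c = Config()
c.load({"port": 8080})
print(c.port)  # valid, but invisible to a purely static member list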
sindbad said:
3. This has some truth to it. Traditional techniques for making languages fast do not work very well for most dynamic languages.


It depends on what optimization techniques you mean. In most typed languages, you can optimize code immensely, due to the facts that:
1. Types and type signatures can't change after compilation, so you can optimize the types without breaking the code.
Imagine, for example, that we have some typed language that uses stacks extensively, and we compile some code in this language on a machine that supports serious stack optimizations. In dynamic languages, you can only optimize the "Stack" class or similar to increase performance, while in typed languages you can optimize the functions that use the Stack class as well, because they have defined parameter lists and will never receive parameters that don't fit into the optimization model and would break the code.
2. You can store various data structures better in memory, because you know exactly how big they are, since you have their types.
3. You can implement matching patterns (à la some functional languages) that can be optimized heavily by caching type signatures, while in dynamic languages you would have to hash a type every time to perform a match operation.
sindbad said:
Erlang is a dynamic language. And it's slow; it's not a good example for this (it has other reasons for being slow, I know).


I don't know why I thought of Erlang :p, it was just the first thing that came to mind for some reason...
The primary reason was that I have seen an Erlang compiler that internally generates separate function signatures for every usage of a function at compile time (so if you have a function "fac(N) -> N * fac(N-1).", then a signature optimized for numbers is generated internally at compile time, since that's the only thing that fits the function body).

And there's no solid definition of what a dynamic language is; it is generally defined as a language that performs a lot of operations during runtime instead of during compilation, and is generally self-modifying in some way (dynamic types, self-modifying code etc.).
sindbad said:
4. No, they don't. Yes, you can; it's just optional.
What is optional? I didn't understand your answer. The example I gave seems plausible to me.

sindbad said:
5. Proper lambda? Are you talking about Python's expression-only lambda? That's simply a design choice. You can define functions anywhere if you do need lambda-like stuff. Look at how decorators work. Also look at Ruby.

I mean that you can have function types that are pure data types. Like this Scala code:
CODE

val myFunc = (x: Int, y: Int) => x + y - x * y
val anotherFunc = myFunc(_: Int, 2) // creates a new one-argument function that calls myFunc with its argument and 2

println(myFunc(2, 3))
println(anotherFunc(5))
Or, another piece of code; this can be very error-prone and inefficient in Python with lambdas:
CODE

def someFunction(x: Int, f: (Int, Int) => Int): Int = {
  f(x, x) // f has type "function that takes two Ints and returns an Int"; it's called here, and its return value is returned
}



And no, I'm not confusing anything ;)
A type in a language, to be called 'strong', should have a defined set of operations that can be done with it, and a defined set of cases it can be used in. In Scala, everything is strong down to the bone.
A statement like "3 + 2" actually evaluates to "(3).+(2)". There's a class called Int that has a method called "+" with many overloads for the various types that you can add to it. This is expressed in source code and is not part of the syntax. You could theoretically extend the Int class and override its operator if it weren't sealed.
Then, when you make a method of a class in Scala (or a first-level function), that method also gains a type; for example, "def function(x: Int) = ..." would have a type like "Function1[Int, R]", i.e. a one-parameter function whose parameter must be of type Int (and whose result type is R). You can store this in a variable etc. and use it as data. This includes all functions, even the native ones. A function, since it has a type, can of course be extended by a class or another function...
Then, there are no "syntax functions" in Scala like there are in Python. Like "print", for example.
This kind of type strength just hasn't been seen so far in dynamic languages, including Python, afaik. Or could you please give me the function signature of "print" in Python? :p
 
dflemstr said:
With type-checking, it's impossible to pass objects of the wrong type to a function. This counts for something. And Scala uses a very modular system, in that a type (aka class) can almost always be used for only one specific purpose, so a lot of errors typically noticed at runtime in C or even Java can be caught at compile time. ...
You're confusing type safety with pattern matching. I know they can be related, but that's not the point. Look at Erlang :p

dflemstr said:
OK, sorry if I am using the wrong terms for some things; I'm not a CS major specialized in dynamic languages :p
What I meant was that in Python, you do not declare variables before using them (as in most dynamic languages), so the IDE/tool has to scan through your code to find the assignments you have made, and it becomes difficult to generate a class member list, and similar things.
I really haven't noticed problems while using IDEs, so it mustn't be that hard. Again, look at Lisps, particularly PLT Scheme. While there may be some problems when building IDEs, there is generally less of a need for them anyway.

dflemstr said:
It depends on what optimization techniques you mean. In most typed languages, you can optimize code immensely, due to the facts that:
1. Types and type signatures can't change after compilation, so you can optimize the types without breaking the code. ...
2. You can store various data structures better in memory, because you know exactly how big they are, since you have their types.
3. You can implement matching patterns (à la some functional languages) that can be optimized heavily by caching type signatures, while in dynamic languages you would have to hash a type every time to perform a match operation.
1. It's quite effective in most static languages, but even there you find problems.
2. Inheritance, polymorphism, interfaces and a few other tricks can really screw this up. It's only true in C.
3. I'm not familiar enough with the implementations of functional languages. I do know that Lisp and Erlang are dynamic languages and that they handle this just fine. And fast.

Really, types and signatures are just one of many ways to optimise code for speed and memory. It's not even the lowest-hanging fruit; a good GC usually has a larger impact.

dflemstr said:
I don't know why I thought of Erlang :p, it was just the first thing that came to mind for some reason... And there's no solid definition of what a dynamic language is; it is generally defined as a language that performs a lot of operations during runtime instead of during compilation, and is generally self-modifying in some way (dynamic types, self-modifying code etc.).
What Erlang does is more pattern matching. HiPE does some weird stuff to optimise it, but it isn't based on types too much. Atoms usually carry type information in Erlang, but I don't really know that much about it.
Yes, it isn't very well defined what exactly a dynamic language is. But Erlang is widely regarded as one.

dflemstr said:
What is optional? I didn't understand your answer. The example I gave seems plausible to me.
You CAN check for types in Python if you want to. You can check for a very specific part of an inheritance tree if you want to. Stuff like isinstance(mything, str).
But it's optional by default. There has been some brainstorming by the BDFL about putting optional type signatures on function calls, but nothing was really implemented.
The key word here is optional. Most of the time I don't want to check the type of something, and in static languages I end up fighting the type system for control. You only need such precise error checking at the interface to a library, and even then it's not useful for all callable functions.
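For illustration, a minimal sketch of that optional, opt-in checking (the function is invented for the example):
CODE

def shout(text):
    # entirely optional: reject anything that isn't a str
    # (isinstance also accepts subclasses, i.e. part of the inheritance tree)
    if not isinstance(text, str):
        raise TypeError("expected a str, got %s" % type(text).__name__)
    return text.upper() + "!"

print(shout("hello"))  # -> HELLO!
try:
    shout(42)          # rejected instead of half-working
except TypeError as e:
    print("rejected:", e)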

dflemstr said:
I mean that you can have function types that are pure data types. Like this Scala code:
CODE

val myFunc = (x: Int, y: Int) => x + y - x * y
val anotherFunc = myFunc(_: Int, 2) // creates a new one-argument function that calls myFunc with its argument and 2

println(myFunc(2, 3))
println(anotherFunc(5))

Or, another piece of code; this can be very error-prone and inefficient in Python with lambdas: ...
Functions are higher-order and they are objects, just like everything else in Python. You can assign them to other things, reassign them, and in general do just about anything with them. They're just declared differently. For anything non-trivial, Pythonistas use defs, not lambdas. That's how decorators return functions that do complex stuff. Also, look at Ruby blocks.
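A rough Python counterpart to the Scala snippet above, using functools.partial (a sketch; no claim that the two are equivalent under the hood):
CODE

from functools import partial

def my_func(x, y):
    # an ordinary function object: it can be stored, passed around, re-bound
    return x + y - x * y

another_func = partial(my_func, y=2)  # fixes the second argument, like myFunc(_, 2)

print(my_func(2, 3))    # -> -1
print(another_func(5))  # -> -3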

dflemstr said:
And no, I'm not confusing anything ;)
A type in a language, to be called 'strong', should have a defined set of operations that can be done with it, and a defined set of cases it can be used in. In Scala, everything is strong down to the bone.
A statement like "3 + 2" actually evaluates to "(3).+(2)". There's a class called Int that has a method called "+" with many overloads for the various types that you can add to it. This is expressed in source code and is not part of the syntax. ...
Or could you please give me the function signature of "print" in Python? :p
It's entirely possible to override the __add__ method in Python, just so you know. It's expressed as part of the code, not the syntax.
Actually, most of the syntax in Python is just syntactic sugar. The print statement just calls a __print__ internal (or something like that; I'm not sure about its name). Note that print becomes a function in 3.0, so it's more easily overridden with logging and the like.
It's like that mostly because it makes common patterns very easy.

In Python, function signatures exist as a convention. You can find them in the __doc__ of each function.
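For what it's worth, that convention looks like this (the function and its docstring are invented for the example):
CODE

def area(width, height):
    """area(width, height) -> width * height

    By convention the first line repeats the signature; nothing enforces it.
    """
    return width * height

print(area.__doc__)   # the conventional, unchecked "signature"
print(area.__name__)  # any function object can be introspected like this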
 
sindbad said:
You're confusing type safety with pattern matching. I know they can be related, but that's not the point. Look at Erlang :p
No, I'm not. In Scala, they are almost the same (and that is why I used them together). What I mean is that type safety can be done right by fusing it with type matching (many languages have done this, by having type parameters/templates combined with types).
sindbad said:
I really haven't noticed problems while using IDEs, so it mustn't be that hard.


Yeah, OK: don't optimize things that don't need optimization; don't choose faster technology if the slower technology suffices. That seems to be the general opinion among developers recently. I do not agree.

sindbad said:
1. It's quite effective in most static languages, but even there you find problems.
2. Inheritance, polymorphism, interfaces and a few other tricks can really screw this up. It's only true in C.
3. I'm not familiar enough with the implementations of functional languages. I do know that Lisp and Erlang are dynamic languages and that they handle this just fine. And fast.

Really, types and signatures are just one of many ways to optimise code for speed and memory. It's not even the lowest-hanging fruit; a good GC usually has a larger impact.


1. Yes, as with all optimizations
2. How? With the kind of "linked list" structure that is used for inheritance, nothing serious can happen.
3. Yes, because in those languages, types are immutable. Lisp only has one set of types: lists of length 1, lists of length 2, etc. The types can't change and no new types can be introduced. Erlang behaves similarly. Therefore, those two languages are typed in a way, since they don't just drop type specifications when declaring things; they also reduce the number of types to a finite amount, thereby basically forcing a variable to have a specific type anyway.

sindbad said:
What Erlang does is more pattern matching. HiPE does some weird stuff to optimise it, but it isn't based on types too much. Atoms usually carry type information in Erlang, but I don't really know that much about it.


Yes, exactly. Erlang only has a list type, an atom type and a number type (there are other things too, but only types with atomic values). This further strengthens my previous point.
sindbad said:
You CAN check for types in Python if you want to.


Yes, but you don't. There's therefore a security flaw that can be exploited in some pieces of code.
sindbad said:
Functions are higher-order and they are objects, just like everything else in Python.


But they aren't objects in the sense that pure object-oriented languages require. Or is it possible to extend a function? Can a function inherit from a class or interface? No.

sindbad said:
Actually, most of the syntax in Python is just syntactic sugar.


Yes, Python kind of goes in the right direction. It is the best of the dynamic languages, imho. Still, how would you introduce a new operator in Python (in code only)? Difficult.

sindbad said:
In Python, function signatures exist as a convention. You can find them in the __doc__ of each function.

Yes, but they can lie, they aren't checked by the runtime/interpreter, and they in no way contribute to the benefits of having a function signature that can be read by the compiler/runtime.
 
dflemstr said:
No, I'm not. In Scala, they are almost the same (and that is why I used them together). What I mean is that type safety can be done right by fusing it with type matching (many languages have done this, by having type parameters/templates combined with types).
That's just a way to work around the limitations of static types.

dflemstr said:
Yeah, OK: don't optimize things that don't need optimization; don't choose faster technology if the slower technology suffices. That seems to be the general opinion among developers recently. I do not agree.
Why? It's a perfectly good attitude as long as you DO optimise stuff that ends up being slow. Optimising things just for the sake of being meticulous is wasting time.

dflemstr said:
2. How? With the kind of "linked list" structure that is used for inheritance, nothing serious can happen.
2. Screw it up as in: make the optimisations ineffective. It does happen a lot for Java, less so for Scala.

dflemstr said:
Yes, but you don't. There's therefore a security flaw that can be exploited in some pieces of code.
It's bad practice not to check stuff coming from user-facing interfaces (including the case of developers using your library); I do it all the time.
Your tests are supposed to check for such holes and find at least the obvious ones. The less obvious ones tend to appear just as much in static languages, even ones with bound types.

dflemstr said:
But they aren't objects in the sense that pure object-oriented languages require. Or is it possible to extend a function? Can a function inherit from a class or interface? No.
Yes, you can inherit from a class. Functions are just objects with a __call__ method. It's entirely up to you what happens inside that method.
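A small sketch of that idea (the classes are invented for the example): an instance with __call__ behaves like a function, and such a "function" can inherit from a class like anything else.
CODE

class Greeter:
    def __init__(self, greeting):
        self.greeting = greeting

    def __call__(self, name):
        # making instances callable is all it takes to act like a function
        return "%s, %s!" % (self.greeting, name)

class LoudGreeter(Greeter):
    # a "function" that inherits from another class
    def __call__(self, name):
        return super(LoudGreeter, self).__call__(name).upper()

hello = Greeter("Hello")
print(hello("world"))              # -> Hello, world!
print(LoudGreeter("Hi")("world"))  # -> HI, WORLD!
print(callable(hello))             # -> True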

dflemstr said:
Yes, Python kind of goes in the right direction. It is the best of the dynamic languages, imho. Still, how would you introduce a new operator in Python (in code only)? Difficult.
Is there a need for a new operator in common code? If there is, I'm sure the BDFL (Guido) will make sure it gets into the next version. He's been delightfully good at predicting our needs so far; it's called "Guido's time machine".

dflemstr said:
Yes, but they can lie, they aren't checked by the runtime/interpreter, and they in no way contribute to the benefits of having a function signature that can be read by the compiler/runtime.
But it can be read by the runtime; that's how IDEs provide call tips.
Being certain of a function's calling signature doesn't tell you anything about the code inside it. If you want to know what it does, you have to look at the code.

I guess the point I'm trying to make is that there are very well thought-out dynamic languages, like Scheme, Python, Erlang (you should see some of the magic people do with it; its types are not as restricted as you think), Common Lisp (look up CLOS), Ruby and Smalltalk (btw, you should look into Smalltalk, it's very interesting). JavaScript could be on the list if I could magically remove things from the language without breaking backwards compatibility.

Scala is unconventional. It's weird and has a large upfront cost. I do recognise many very good design choices in it, but it restricts me way too much. In my eyes, its only saving grace is that it's a functional language; in the same way, I like Haskell, but I'd prefer never to have to write code in it.

Python gets a lot of things right. In fact, it gets most things right in a practical, yet very clean way. It does have its warts, many of which get removed in 3.0. It does have things that look less clean (int("2") as opposed to "2".int()), but its general readability is amazing. And that's about the only thing it enforces. All the other things called "pythonic" mostly have to do with conventions that follow the Zen of Python (try import this). Breaking these conventions doesn't really hurt the person doing so, but it will make their code harder to read, especially for themselves.

 
sindbad said:
Why? It's a perfectly good attitude as long as you DO optimise stuff that ends up being slow. Optimising things just for the sake of being meticulous is wasting time.
That's how the Vista devs probably thought. OK, I won't go any further in that direction :p
sindbad said:
2. Screw it up as in: make the optimisations ineffective. It does happen a lot for Java, less so for Scala.


Both languages use the same JVMs, so issues in one language typically exist in both :p
As I said, Scala is a good language on bad foundations; can't do a lot about that, though.
All optimizations have negative effects as well. That's why you should never ever use -O3 for some programs in gcc, for example. That's just how life is.

sindbad said:
Your tests are supposed to check for such holes and find at least the obvious ones. The less obvious ones tend to appear just as much in static languages, even ones with bound types.
Agreed. I'd still rather match on a function parameter and have the compiler save me a few lines of unit tests, though.

sindbad said:
Yes, you can inherit from a class. Functions are just objects with a __call__ method. It's entirely up to you what happens inside that method.

That is very nice, and just like it is in Scala, actually. (In Scala, the method is called "apply", and it is used by Arrays as well, for example, to allow "myarray(index)"... I like languages that build on a small core, as I already told you ;) )
sindbad said:
Is there a need for a new operator in common code? If there is, I'm sure the BDFL (Guido) will make sure it gets into the next version. He's been delightfully good at predicting our needs so far; it's called "Guido's time machine".

What I meant was that your own objects (in your own libraries) should be able to define their own operators, allowing you to make domain-specific "syntax" alterations that shorten code and make it a lot more understandable.

Example: here's an (immutable) Rational class that represents a rational scalar in code (for precision calculations etc.):
CODE

class Rational(n: Int, d: Int) {
  require(d != 0)

  private val g = gcd(n.abs, d.abs)
  val numer = n / g
  val denom = d / g

  def this(n: Int) = this(n, 1)

  def + (that: Rational): Rational =
    new Rational(
      numer * that.denom + that.numer * denom,
      denom * that.denom
    )

  def * (that: Rational): Rational =
    new Rational(numer * that.numer, denom * that.denom)

  override def toString = numer + "/" + denom

  private def gcd(a: Int, b: Int): Int =
    if (b == 0) a else gcd(b, a % b)
}
Now, you can write things like "new Rational(4, 6) + new Rational(5, 7)" and have it behave correctly.
(there's more to this class than I've written here; if you extend it further it supports things like "(4, 6) + (5, 7)" too)

sindbad said:
But it can be read by the runtime; that's how IDEs provide call tips.
Being certain of a function's calling signature doesn't tell you anything about the code inside it. If you want to know what it does, you have to look at the code.

What I meant was that the runtime can't check whether you are making a valid call by reading the __doc__ field.
And the way some functions are written in Python, you can't even tell what kind of variable type they expect, even by reading their (often incomplete) help and looking at the code itself. Types are a great help here.

sindbad said:
I guess the point I'm trying to make is that there are very well thought-out dynamic languages, like Scheme, Python, Erlang (you should see some of the magic people do with it; its types are not as restricted as you think), Common Lisp (look up CLOS), Ruby and Smalltalk (btw, you should look into Smalltalk, it's very interesting).

I am very interested in programming languages as a whole. Many of them have gimmicks and extremely good, well-thought-out features. I used to program in Erlang (not professionally, mind you, but seriously, for an OSS formula-parsing program), so I know what I'm talking about (most of the time; it was a while ago). I also try to try out every language I come across; I even wrote an interpreter for Befunge once (that part is trivial; it's the compiling that's difficult, but still, you gain some in-depth knowledge of a language by writing an interpreter for it).

What appeals to me about Scala is that it combines all the paradigms I've ever seen into one language. It's not the "best" language I've seen; it is just very well engineered and has some very interesting aspects, such as DSL capabilities, ease of use AND 'depth', integration with Java (so that you can use Java's libraries, run Scala programs on JVMs, and therefore write anything from CLI tools to scripts to 3D games to high-performance web servers, much like Python, really :p ), and its extreme concurrency capabilities (à la Erlang) and, through that, optimizations for multi-core processors etc.

I haven't found another language that can be so "one size fits all", but as with all such things, there are bad things about it too.
Scala was developed as a proof of concept that OOP and FP could be combined, and it has grown from that. It is one of the languages that could one day become Java 2.0. That's why it interests me so much.
EDIT: The Rational class code is taken from the excellent book "Programming in Scala" by Martin Odersky, Lex Spoon and Bill Venners. I should have the right to use it; if not, please tell me and I will remove it.
 
DSLs are an entirely different subject. I personally think they're a solution looking for a problem most of the time. They can be useful, but not usually. If you like those, look at Ruby.

You can do the same trick in Python, with __add__: Rational(2) + Rational(3). The same goes for any operator. There's a guy who overloaded | for his message-passing library so that it looks more like ! in Erlang.
The trick wouldn't work for the contents of a regular tuple without changing the behaviour of all tuples. Possible, but not recommended. The pythonic way would be to make the constructor of Rational take either several arguments or a tuple. Or even named arguments. Or all of them.
CODE

Rational(1, 2)
Rational( (1, 2) )
Rational(num=1, denom=2)
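And the __add__ trick mentioned above, as a minimal sketch (this bare-bones Rational is made up here, not a port of the Scala one):
CODE

from math import gcd

class Rational:
    def __init__(self, n, d=1):
        if d == 0:
            raise ValueError("zero denominator")
        g = gcd(abs(n), abs(d))
        self.numer, self.denom = n // g, d // g

    def __add__(self, other):
        # this is all it takes for Rational(4, 6) + Rational(5, 7) to work
        return Rational(self.numer * other.denom + other.numer * self.denom,
                        self.denom * other.denom)

    def __repr__(self):
        return "%d/%d" % (self.numer, self.denom)

print(Rational(4, 6) + Rational(5, 7))  # -> 29/21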
 
sindbad said:
DSLs are an entirely different subject. I personally think they're a solution looking for a problem most of the time. They can be useful, but not usually. If you like those, look at Ruby.


That's why I said that I like Scala: it has all the features I've ever seen in other languages. (Compare Ruby on Rails to Lift (formerly Scala on Sails), a Scala library that makes web serving EASY (after getting used to it, of course).)
sindbad said:
You can do the same trick in Python, with __add__: Rational(2) + Rational(3). The same goes for any operator. There's a guy who overloaded | for his message-passing library so that it looks more like ! in Erlang.
In Scala, you can of course do things like "class lol { def #%&(that: Int) = whatever }" and introduce almost any operator. You can also define prefix operators, like "class lol { def unary_! = foo }" (Scala supports unary_+, unary_-, unary_! and unary_~)...
It gives you a freedom unattainable in most other languages.

Anyways, it was great having this small discussion, but I'm signing off now for an hour or so, so don't expect any rapid answers to anything you post :p
 